The connectivity of neuronal networks has a significant effect on their functionality and role. Until recently, the only viable way of mapping the neuronal connectome was the direct approach, namely, to slice and image the neural tissue. However, direct approaches have proven to be very time-consuming, complex, and costly. As a result, inverse approaches that use the firing activity of neurons to identify the (functional) connections have become increasingly popular in recent years, especially in light of rapid advances in recording technologies, which will soon make it possible to simultaneously monitor the activities of thousands of neurons.
While there are many excellent approaches to identifying functional connections from firing activity, scalability is a major challenge in applying many existing algorithms to large datasets of firing activities. In the few exceptional cases where scalability is not an issue, the theoretical performance guarantees apply only to a specific family of neurons or firing activities. In this talk, I will explain a new (re)formulation of the inverse inference problem that is well suited to machine learning algorithms and, as an additional advantage, facilitates theoretical analysis. In particular, we can derive the conditions under which the identified functional graph matches the underlying synaptic connections. Finally, I will discuss the performance of the algorithm on a dataset of artificially generated spiking patterns, which allows benchmarking against a known ground truth.
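To make the idea of recasting inverse inference as a learning problem concrete, here is a minimal hypothetical sketch (not the speaker's actual algorithm): each neuron's spike at time t+1 is treated as a binary label, and the other neurons' spikes at time t as features, so that inferring incoming connections reduces to fitting one logistic-regression classifier per neuron. All parameters (network size, connection probability, learning rate) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_steps = 8, 5000

# Hypothetical ground-truth synaptic graph: sparse, excitatory,
# no self-connections. W_true[i, j] = 1 means j projects to i.
W_true = (rng.random((n_neurons, n_neurons)) < 0.25).astype(float)
np.fill_diagonal(W_true, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Simulate spikes with a simple linear-nonlinear (logistic) model:
# each neuron fires with probability sigmoid(weighted input + bias).
spikes = np.zeros((n_steps, n_neurons))
spikes[0] = rng.random(n_neurons) < 0.3
bias = -1.0
for t in range(n_steps - 1):
    p = sigmoid(spikes[t] @ W_true.T + bias)
    spikes[t + 1] = rng.random(n_neurons) < p

# Inverse step: per-neuron logistic regression by batch gradient
# descent on the spike history (features = spikes at t, labels = t+1).
X, Y = spikes[:-1], spikes[1:]
W_hat = np.zeros((n_neurons, n_neurons))
b_hat = np.zeros(n_neurons)
lr = 1.0
for _ in range(1000):
    P = sigmoid(X @ W_hat.T + b_hat)      # predicted firing probabilities
    grad_W = (P - Y).T @ X / len(X)       # gradient of the logistic loss
    grad_b = (P - Y).mean(axis=0)
    W_hat -= lr * grad_W
    b_hat -= lr * grad_b
np.fill_diagonal(W_hat, 0.0)

# Threshold the learned weights to read off a functional graph and
# compare it to the ground-truth synaptic connections.
edges_hat = W_hat > 0.5
accuracy = (edges_hat == (W_true > 0)).mean()
print(f"edge recovery accuracy: {accuracy:.2f}")
```

Because the simulator and the classifier share the same logistic form here, the fit is well specified and the thresholded weights recover most edges; the talk's theoretical conditions address exactly when such a recovered functional graph coincides with the true synaptic one.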