The theory of machine learning, and of deep learning in particular, has lately witnessed an explosion of work aimed at deciphering its "black-box" methods. Optimizing deep neural networks is still largely an empirical process, requiring manual tuning of several hyperparameters. Gaining insight into these parameters has attracted much attention recently.
The conference focuses on theoretical insight into the computation and setting of these parameters, and solicits original work showing how such theoretical frameworks influence experimental results on standard datasets and architectures.
The conference also aims to draw valuable talking points from optimization studies, another central aspect of deep learning architectures and experiments. It is in this spirit that the organizers wish to bridge metaheuristic optimization methods with deep neural networks, and they solicit papers exploring alternatives to gradient descent/ascent-type methods.
Papers offering theoretical insights and proofs are particularly sought, even when experimental validation is limited or absent. We welcome cutting-edge research on aspects of deep learning theory as applied in artificial intelligence, statistics and data science, and theoretical and numerical optimization.