I am currently a Stanford data science scholar and a postdoctoral fellow in the Department of Statistics at Stanford working with David Donoho. I have designed and taught a new data science course at Stanford: Massive Computational Experiments, Painlessly (STATS285).
I am interested in various aspects of data science, in particular the role that massive computation will play in the future of science. My current activities involve developing a deeper understanding of high-dimensional linear classification, deep learning, and sparse reconstruction problems. I am interested in accelerating applications such as MRI, NMR spectroscopy, and microscopy, where long sampling times are a major challenge. I also develop computer abstractions and tools to facilitate ambitious data science studies involving million-CPU-hour computational experiments. I created ClusterJob, an experiment management system (EMS) that is currently used daily by more than 100 researchers at Stanford. To see a list of my present and past projects, please check my research page.