“Ronald Fisher and Maximum Likelihood Estimation” is a presentation that I gave to the Bay Area Entrepreneurs in Statistics (BAES) on August 20, 2016 at Symation in Richmond, CA — just north of Berkeley. BAES is a Meetup group for people interested in the application of statistics to entrepreneurial ventures.
A video and slides from the presentation are now online:
Maximum Likelihood Estimation (MLE) is one of the major foundational methods of parameter estimation and statistical inference. It is used in many fields, from experimental particle physics, where it is regularly used to detect and measure the parameters of new particles such as the Higgs boson at the Large Hadron Collider (LHC), to speech recognition, where it forms the basis of the traditional Hidden Markov Model (HMM) speech recognition algorithms used by the Carnegie Mellon University (CMU) Sphinx open-source speech recognition engine and Nuance’s Dragon NaturallySpeaking.
In addition to parameter estimation, MLE can be used for classification, as is done in speech recognition: is this utterance “I scream” or “ice cream,” for example? This talk will discuss the origins of Maximum Likelihood Estimation in the pioneering work of Ronald Fisher, some history of the development and use of the method, and various practical and theoretical problems that bedevil this popular, powerful, but difficult-to-use technique, including vulnerability to outliers and the problem of robustness, practical problems with fitting multidimensional models, proper normalization of the models, and limited computer power.
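For readers unfamiliar with the core idea, here is a minimal sketch (not taken from the talk) of MLE for the simplest possible case: estimating the success probability of a coin from observed flips. It maximizes the log-likelihood by brute-force search over candidate values, which makes the "pick the parameter that makes the data most probable" idea explicit; real applications use analytic solutions or numerical optimizers instead.

```python
import math

def log_likelihood(p, heads, tails):
    # Log-likelihood of observing `heads` heads and `tails` tails
    # under a Bernoulli model with success probability p.
    return heads * math.log(p) + tails * math.log(1 - p)

def mle_bernoulli(heads, tails, steps=10000):
    # Brute-force search: evaluate the log-likelihood on a fine grid
    # of candidate probabilities and return the maximizer (the MLE).
    candidates = [(i + 1) / (steps + 1) for i in range(steps)]
    return max(candidates, key=lambda p: log_likelihood(p, heads, tails))

# With 7 heads in 10 flips, the MLE is (up to grid resolution) the
# sample proportion heads / (heads + tails) = 0.7.
p_hat = mle_bernoulli(heads=7, tails=3)
```

For this model the maximum can be found in closed form (the sample proportion), but the search-based version generalizes directly to models where no closed form exists.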
The slides for this talk can also be found here:
NOTE: For privacy reasons, a still frame is overlaid on the lower left corner of the video. If you notice that you can see through the shoulder of an audience member, this overlay is the reason.