Episodes

June 21, 2019
DSA Addis Ababa and ICML Los Angeles
In episode twelve of season five we bring you a rundown of Data Science Africa's latest workshop, answer a listener question about what got us excited at ICML, and hear the first part of our conversation with Michael Melese of Addis Ababa University and Charles Saidu of Baze University, Abuja.
55 min
June 6, 2019
Data Trusts and Citation Trends
In episode eleven of season five, we dig into just what a data trust actually is, take a look at citation trends and other places (PMLR) you can dig up data to understand the field, and talk with Raia Hadsell of DeepMind.
54 min
May 23, 2019
Reproducibility and Revisiting History
In episode ten of season five we talk about reproducibility, take a listener question on re-understanding the history of the field given where we are now and how other fields are reviewing their own history, and listen to a conversation with Graham Taylor of the Vector Institute.
46 min
May 10, 2019
Insights from AISTATS
In episode nine of season five we talk about some interesting work from AISTATS, dive into unbiased implicit variational inference, and chat with Jon McAuliffe, CIO of Voleon.
52 min
April 25, 2019
The Deep End of Deep Learning
In this episode, as we prep for ICLR, we take a break from our usual format to bring you a talk on deep learning from Hugo Larochelle at TEDx Boston.
19 min
April 11, 2019
Exploring MARS and Getting Back to Bayesics
In episode seven of season five we chat about MARS and re:MARS, discuss OpenAI's status changes, and talk with Jasper Snoek of Google Brain.
68 min
March 28, 2019
The Sweetness of a Bitter Lesson and Bringing ML and Healthcare Closer
In episode six of season five we talk about Richard Sutton's The Bitter Lesson, chat about IEEE's new ethical guidelines, and talk with Andrew Beam, Senior Fellow at Flagship Pioneering, Head of Machine Learning for Flagship VL57, and Assistant Professor in the Department of Epidemiology at the Harvard T.H. Chan School of Public Health. Here are some of the papers we got to chat about! Also, VL57 is hiring!

Adversarial attacks on medical ML (Science paper): Finlayson, S.G., Bowers, J.D., Ito, J., Zittrain, J.L., Beam, A.L. and Kohane, I.S., 2019. Adversarial attacks on medical machine learning. Science, 363(6433), pp. 1287-1289. Link: https://cyber.harvard.edu/story/2019-03/adversarial-attacks-medical-ai-health-policy-challenge

JAMA papers: Beam, A.L. and Kohane, I.S., 2016. Translating artificial intelligence into clinical care. JAMA, 316(22), pp. 2368-2369. Link: https://www.dropbox.com/s/4o1va07tqwvrxsn/Beam_TranslatingAI_2016.pdf?dl=0

Beam, A.L. and Kohane, I.S., 2018. Big data and machine learning in health care. JAMA, 319(13), pp. 1317-1318. Link: https://www.dropbox.com/s/q1cixzmsdugq3vy/Beam_BigData_ML.pdf?dl=0

Opportunities in machine learning for healthcare: Ghassemi, M., Naumann, T., Schulam, P., Beam, A.L. and Ranganath, R., 2018. Opportunities in machine learning for healthcare. arXiv preprint arXiv:1806.00388. Link: https://arxiv.org/abs/1806.00388
50 min
March 14, 2019
Slowed Down Conferences and Even More Summer Schools
In episode five of season five we talk about the Stu Hunter conference and summer school options (DLRLSS!), and chat with Adrian Weller of the Alan Turing Institute.
43 min
February 28, 2019
Jupyter Notebooks and Modern Model Distribution
In episode four of season five we talk about Jupyter Notebooks and Neil's dream of a world of craft software and devices, we take a listener question about the conversation surrounding OpenAI's GPT-2, its announcement and the coverage, and we hear an interview with Brooks Paige of the Alan Turing Institute.
36 min
February 15, 2019
Real World Real Time and Five Papers for Mike Tipping
In season five episode three we chat about Five Papers for Mike Tipping (a listener question), take a listener question on AIAI, and chat with Eoin O'Mahony of Uber. Here are Neil's five papers. What are yours?

Stochastic Variational Inference by Hoffman, Wang, Blei and Paisley. http://arxiv.org/abs/1206.7051 A way of doing approximate inference for probabilistic models with potentially billions of data ... need I say more?

Austerity in MCMC Land: Cutting the Metropolis-Hastings Budget by Korattikara, Chen and Welling. http://arxiv.org/abs/1304.5299 Oh ... I do need to say more ... because these three are at it as well, but from the sampling perspective. Probabilistic models for big data ... an idea so important it needed to be in the list twice.

Practical Bayesian Optimization of Machine Learning Algorithms by Snoek, Larochelle and Adams. http://arxiv.org/abs/1206.2944 This paper represents the rise in probabilistic numerics; I could also have chosen papers by Osborne, Hennig or others. There are too many papers out there already. Definitely an exciting area, be it optimisation, integration or differential equations. I chose this paper because it seems to have blown the field open to a wider audience, focussing as it did on deep learning as an application, so it lets me capture both an area of developing interest and an area that hits the national news.

Kernel Bayes' Rule by Fukumizu, Song and Gretton. http://arxiv.org/abs/1009.5736 One of the great things about ML is how we have different (and competing) philosophies operating under the same roof. But because we still talk to each other (and sometimes even listen to each other) these ideas can merge to create new and interesting things. Kernel Bayes' Rule makes the list.

ImageNet Classification with Deep Convolutional Neural Networks by Krizhevsky, Sutskever and Hinton. http://www.cs.toronto.edu/~hinton/absps/imagenet.pdf An obvious choice, but you don't leave the Beatles off lists of great bands just because they are an obvious choice.
61 min
February 1, 2019
The Bezos Paradox and Machine Learning Languages
In episode two of season five we unpack the Bezos Paradox (TM Neil Lawrence), take a listener question about best papers, and chat with Dougal Maclaurin of Google Brain.
41 min
January 17, 2019
Being Global Bit by Bit
In episode one of season five we talk about Bit by Bit, take a listener question on machine learning gatherings on the African continent (Deep Learning INDABA! DSA!), and hear an interview with Daphne Koller recorded at ODSC West.
48 min
November 29, 2018
The Possibility Of Explanation and The End of Season Four
For the end of season four we take a break from our regular format and bring you a talk from Professor Finale Doshi-Velez of Harvard University on the possibility of explanation. Tune in next season!
18 min
November 16, 2018
Neural Information Processing Systems and Distributed Internal Intelligence Systems
In episode twenty one of season four we talk about distributed intelligence systems (mainly those internal to humans), talk about what we're excited to see at the Conference on Neural Information Processing Systems, and, in advance of our trek to Canada, chat with Garth Gibson, president and CEO of the Vector Institute.
36 min
November 1, 2018
Data Driven Ideas and Actionable Privacy
In episode twenty of season four we talk about the importance of crediting your data, answer a listener question about internships vs salaried positions, and talk with Matt Kusner of the Alan Turing Institute, the UK’s national institute for data science and AI.
45 min
October 18, 2018
AI for Good and The Real World
In episode nineteen of season four we talk about causality in the real world, take a question about being surprised by the elephant in the room and talk with Kush Varshney of IBM.
32 min
October 5, 2018
Systems Design and Tools for Transparency
In episode 18 of season four we talk about systems design (remember the 3 d's!), tools for transparency and fairness, and we talk with Adria Gascon of the Alan Turing Institute, the UK’s national institute for data science and AI.
40 min
September 20, 2018
How to Research in Hype and CIFAR's Strategy
In episode 17 of season four we talk about how to research in a time of hype (and other lessons from Tom Griffiths' book), Neil's love of variational methods, and chat with Elissa Strome, director of the Pan-Canadian AI Strategy for CIFAR.
37 min
September 7, 2018
Troubling Trends and Climbing Mountains
In this episode we talk about the article Troubling Trends in Machine Learning Scholarship, the difference between engineering and science (and the mountains you climb to span the distance), plus we talk with David Duvenaud of the University of Toronto.
39 min
August 23, 2018
Gaussian Processes, Grad School, and Richard Zemel
43 min
August 9, 2018
Long Term Fairness
29 min
July 27, 2018
Simulated Learning and Real World Ethics
In episode thirteen of season four we chat about simulations, reinforcement learning, and Philippa Foot. We take a listener question about the update to the ACM code of ethics (first time since 1992!), and we talk with Professor Mike Jordan.
57 min
July 12, 2018
ICML 2018 with Jennifer Dy
Season four episode twelve finds us at ICML! We bring you a special episode with Jennifer Dy, co-program chair of the conference.
19 min
June 28, 2018
Aspirational Asimov and How to Survive a Conference
In season four episode eleven we talk about the possibility of the NIPS conference changing its name, what to do at ICML, and we talk with Bernhard Schölkopf.
45 min
June 14, 2018
Explanations and Reviews
In episode 10 of season 4 we chat about Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR, take a listener question about how reviews of papers work at NIPS and we hear from Sven Strohband, CTO of Khosla Ventures.
23 min
May 31, 2018
Statements on Statements
In episode 9 of season 4 we talk about the Statement on Nature Machine Intelligence. We reached out to Nature for a statement on the statement and received the following:

“At Springer Nature we are very clear in our mission to advance discovery and help researchers share their work. Having an extensive, and growing, open access portfolio is one important way we do this but it is important to remember that while open access has been around for 20 years now it still only accounts for a small percentage of overall global research output with demand for subscription content remaining high. This is because the move to open access is complex, and for many, simply not a viable option.

Nature Machine Intelligence is a new subscription journal that aims to stimulate cross-disciplinary interactions, reach broad audiences and explore the impact that AI research has on other fields by publishing high-quality research, reviews and commentary on machine learning, robotics and AI. It involves substantial editorial development, offers high levels of author service and publishes informative, accessible content beyond primary research all of which requires considerable investment. At present, we believe that the fairest way of producing highly selective journals like this one and ensuring their long-term sustainability as a resource for the widest possible community, is to spread these costs among many readers — instead of having them borne by a few authors.

We also offer multiple open access options for AI authors. We already publish AI papers in Scientific Reports and Nature Communications, which are the largest open access journal in the world and the most cited open access journal respectively. We offer hybrid publishing options and are set to launch a new AI multidisciplinary, open access journal later this year. We help all researchers to freely share their discoveries by encouraging preprint posting and data- and code-sharing and continue to extend access to all Nature journals in various ways, including our free SharedIt content-sharing initiative, which provides authors and subscribers with shareable links to view-only versions of published papers.”

We also get a chance to talk with Maithra Raghu from the Google Brain team about her work.
26 min
May 17, 2018
The Futility of Artificial Carpenters and Further Reading
In episode eight of season four we review some recently published articles by Michael Jordan and Rodney Brooks (for more reading along these lines, Tom Dietterich is a great person to follow), we recommend some further reading, and talk with Arthur Gretton, who was part of the team behind one of the Best Papers at NIPS 2017. For more reading we recommend Machine Learning Yearning, Talking Nets, The Mechanical Mind in History, and Colossus.
37 min
May 3, 2018
Economies, Work and AI
In episode seven of season four we chat about ELLIS and the UK AI Sector Deal, we take a listener question about the next AI winter and if/when it is coming, plus we hear from Christina Colclough, Director of Platform and Agency Workers, Digitalization and Trade at UNI Global Union.
42 min
April 19, 2018
Explainability and the Inexplicable
In episode six of season four we chat about AI and religion, we take a listener question about personal bias checking and we hear from Been Kim of Google Brain.
43 min
April 5, 2018
Good Data Practice Rules
In episode five of season four we talk about the GDPR, or as we like to think of it, Good Data Practice Rules. (If you actually read it, you move to expert level!) We take a listener question about the power of approximate inference, and we hear from our guest Andrew Blake of The Alan Turing Institute.
51 min
March 22, 2018
Can an AI Practitioner Fix a Radio?
In episode four of season four we talk more about natural and artificial intelligences and thinking about diversity in systems. Reading Can a Biologist Fix a Radio is a great paper around these ideas. We take a listener question about moving into machine learning after having advanced training in a different program. Our guest on this episode is our second two-time guest, Peter Donnelly, Professor of Statistical Science at the University of Oxford, Director of the Wellcome Trust Centre for Human Genetics and a Fellow of the Royal Society.
44 min
March 8, 2018
Natural vs Artificial Intelligence and Doing Unexpected Work
In season four episode three of Talking Machines we chat about Neil’s recent thinking (definitely not work) on the core differences between natural intelligence and machine intelligence; he recently wrote a blog post on the subject, and in the fall of 2017 he gave a TEDx talk about the topic. We also take a listener question about what maths you should take to get into building ML tools. Our guests this week are Moshe Vardi, Karen Ostrum George Distinguished Service Professor in Computational Engineering and Director of the Ken Kennedy Institute for Information Technology at Rice University, and Margaret Levi, Director of the Center for Advanced Study in the Behavioral Sciences (CASBS) at Stanford, Professor of Political Science at Stanford University, and Jere L. Bacharach Professor Emerita of International Studies in the Department of Political Science at the University of Washington. They co-organized a symposium put on by the American Academy of Arts and Sciences and the Royal Society about the future of work. We got a chance to speak to both of them about their work and the event.
58 min
February 22, 2018
Scientific Rigor and Turning Information into Action
In episode two of season four we're proud to bring you the second annual "Hosts of Talking Machines Episode"! Ryan and Neil chat about Ali Rahimi's speech at NIPS 2017, Kate Crawford's talk The Trouble with Bias, and much more. We also get to hear a conversation with Ciira wa Maina, lecturer in the Department of Electrical and Electronic Engineering at Dedan Kimathi University of Technology in Nyeri, Kenya.
38 min
February 8, 2018
Code Review for Community Change
On this episode of Talking Machines we take a break from our regular format to talk about the “code review of community culture” that the AI, ML, Stats and Computer Science fields in general need to undergo. In a blog post that was put up shortly after NIPS, researcher Kristian Lum outlined several instances of sexual harassment and abuse of power. In her post she mentioned Brad Carlin and a person who she referred to as S. We learned in reporting done by Bloomberg that S was Steven Scott, who was at Google. As of this posting, Carlin is under investigation and Scott has left Google after being suspended. Today we pause in our regular format to talk about how we, as a community, can change. Full disclosure: Neil and Katherine served as press chairs for NIPS 2017. They will hold the same post for ICML 2018 and NIPS 2018 and are working along with the other organizers of these events to effect change around these issues.
35 min
October 5, 2017
The Pace of Change and The Public View of ML
In episode ten of season three we talk about the rate of change (prompted by Tim Harford), take a listener question about the power of kernels, and talk with Peter Donnelly in his capacity with the Royal Society's Machine Learning Working Group about the work they've done on the public's views on AI and ML.
40 min
September 21, 2017
The Long View and Learning in Person
In episode nine of season three we chat about the difference between models and algorithms, take a listener question about summer schools and learning in person as opposed to learning digitally, and we chat with John Quinn of the United Nations Global Pulse lab in Kampala, Uganda and Makerere University's Artificial Intelligence Research group.
65 min
September 8, 2017
Machine Learning in the Field and Bayesian Baked Goods
In episode eight of season three we return to the epic (or maybe not so epic) clash between frequentists and bayesians, take a listener question about the ethical questions generators of machine learning should be asking of themselves (not just their tools) and we hear a conversation with Ernest Mwebaze of Makerere University.
59 min
August 10, 2017
Data Science Africa with Dina Machuve
In episode seven of season three we take a minute to break away from our regular format and feature a conversation with Dina Machuve of the Nelson Mandela African Institute of Science and Technology; we cover everything from her work to how cell phone access has changed data patterns. We got to talk with her at the Data Science Africa conference and workshop.
48 min
July 28, 2017
The Church of Bayes and Collecting Data
In episode six of season three we chat about the difference between frequentists and Bayesians, take a listener question about techniques for panel data, and have an interview with Katherine Heller of Duke.
49 min
July 13, 2017
Getting a Start in ML and Applied AI at Facebook
In episode five of season three we compare and contrast AI and data science, take a listener question about getting started in machine learning, and listen to an interview with Joaquin Quiñonero Candela. For a great place to get started with foundational ideas in ML, take a look at Andrew Ng’s course on Coursera. Then check out Daphne Koller’s course. Talking Machines is now working with Midroll to source and organize sponsors for our show. In order to find sponsors who are a good fit for us, and of worth to you, we’re surveying our listeners. If you’d like to help us get a better idea of who makes up the Talking Machines community take the survey at http://podsurvey.com/MACHINES.
57 min
June 29, 2017
Bias Variance Dilemma for Humans and the Arm Farm
In episode four of season three Neil introduces us to the ideas behind the bias variance dilemma (and how we can think about it in our daily lives). Plus, we answer a listener question about how to make sure your neural networks don't get fooled. Our guest for this episode is Jeff Dean, Google Senior Fellow in the Research Group, where he leads the Google Brain project. We talk about a closet full of robot arms (the arm farm!), image recognition for diabetic retinopathy, and equality in data and the community. Fun Fact: Geoff Hinton’s distant relative invented the word tesseract. (How cool is that. Seriously.)
50 min
June 15, 2017
Overfitting and Asking Ecological Questions with ML
In episode three of season three of Talking Machines we dive into overfitting, take a listener question about unbalanced data, and talk with Professor (Emeritus) Tom Dietterich from Oregon State University.
41 min
May 25, 2017
Graphons and "Inferencing"
In episode two of season three Neil takes us through the basics on dropout, we chat about the definition of inference (It's more about context than you think!) and hear an interview with Jennifer Chayes of Microsoft.
41 min
April 27, 2017
Hosts of Talking Machines: Neil Lawrence and Ryan Adams
Talking Machines is entering its third season and going through some changes. Our founding host Ryan is moving on, and in his place Neil Lawrence of Amazon is taking over as co-host. We say thank you and goodbye to Ryan with an interview about his work.
33 min
September 1, 2016
ANGLICAN and Probabilistic Programming
In episode seventeen of season two we get an introduction to Min Hashing, talk with Frank Wood, the creator of ANGLICAN, about probabilistic programming and his new company, INVREA, and take a listener question about how to choose an architecture when using a neural network.
44 min
August 18, 2016
Eric Lander and Restricted Boltzmann Machines
In episode sixteen of season two, we get an introduction to Restricted Boltzmann Machines, we take a listener question about tuning hyperparameters,  plus we talk with Eric Lander of the Broad Institute.
53 min
August 4, 2016
Generative Art and Hamiltonian Monte Carlo
In episode fifteen of season two, we talk about Hamiltonian Monte Carlo, we take a listener question about unbalanced data, plus we talk with Doug Eck of Google’s Magenta project.
47 min
July 21, 2016
Perturb-and-MAP and Machine Learning in the Flint Water Crisis
In episode fourteen of season two, we talk about Perturb-and-MAP, we take a listener question about classic artificial intelligence ideas being used in modern machine learning, plus we talk with Jake Abernethy of the University of Michigan about municipal data and his work on the Flint water crisis.
38 min
July 7, 2016
Automatic Translation and t-SNE
In episode thirteen of season two, we talk about t-Distributed Stochastic Neighbor Embedding (t-SNE), we take a listener question about statistical physics, plus we talk with Hal Daume of the University of Maryland (who is a great follow on Twitter).
32 min
June 16, 2016
Fantasizing Cats and Data Numbers
In episode twelve of season two, we talk about generative adversarial networks, we take a listener question about using machine learning to improve or create products, plus we talk with Iain Murray of the University of Edinburgh.
49 min
June 2, 2016
Spark and ICML
In episode eleven of season two, we talk about the machine learning toolkit  Spark, we take a listener question about the differences between NIPS and ICML conferences, plus we talk with Sinead Williamson of The University of Texas at Austin.
39 min
May 19, 2016
Computational Learning Theory and Machine Learning for Understanding Cells
In episode ten of season two, we talk about Computational Learning Theory and Probably Approximately Correct Learning originated by Professor Leslie Valiant of SEAS at Harvard, we take a listener question about generative systems, plus we talk with Aviv Regev, Chair of the Faculty and Director of the Klarman Cell Observatory and the Cell Circuits Program at the Broad Institute.
40 min
May 5, 2016
Sparse Coding and MADBITS
In episode nine of season two, we talk about sparse coding, take a listener question about the next big demonstration for AI after AlphaGo. Plus we talk with Clement Farabet about MADBITS and the work he’s doing at Twitter Cortex.
41 min
April 21, 2016
Remembering David MacKay
Recently Professor David MacKay passed away. We’ll spend this episode talking about his extensive body of work and its impacts. We’ll also talk with Philipp Hennig, a research group leader at the Max Planck Institute for Intelligent Systems, who trained in Professor MacKay’s group (with Ryan).
53 min
April 8, 2016
Machine Learning and Society
Episode seven of season two is a little different than our usual episodes: Ryan and Katherine just returned from a conference where they got to talk with Neil Lawrence of the University of Sheffield about some of the larger issues surrounding machine learning and society. They discuss anthropomorphic intelligence, data ownership, and the ability to empathize. The entire episode is given over to this conversation in hopes that it will spur more discussion of these important issues as the field continues to grow.
48 min
March 24, 2016
Software and Statistics for Machine Learning
In episode six of season two, we talk about how to build software for machine learning (and what the roadblocks are), we take a listener question about how to start exploring a new dataset, plus, we talk with Rob Tibshirani of Stanford University.
39 min
March 10, 2016
Machine Learning in Healthcare and The AlphaGo Matches
In episode five of season two Ryan walks us through variational inference, we put some listener questions about Go and how to play it to Andy Okun, president of the American Go Association (who is in Seoul, South Korea watching the Lee Sedol/AlphaGo games). Plus we hear from Suchi Saria of Johns Hopkins about applying machine learning to understanding health care data.
48 min
February 25, 2016
AI Safety and The Legacy of Bletchley Park
In episode four of season two, we talk about some of the major issues in AI safety (and how they’re not really that different from the questions we ask whenever we create a new tool). One place you can go for other opinions on AI safety is the Future of Life Institute. We take a listener question about time series and we talk with Nick Patterson of the Broad Institute about everything from ancient DNA to Alan Turing. If you're as excited about AlphaGo playing Lee Sedol as Nick is, you can get details on the match on DeepMind's YouTube channel March 5th through the 15th.
48 min
February 11, 2016
Robotics and Machine Learning Music Videos
In episode three of season two Ryan walks us through the AlphaGo results and takes a listener question about using Gaussian processes for classification. Plus we talk with Michael Littman of Brown University about his work, robots, and making music videos. Also not to be missed, Michael’s appearance in the recent TurboTax ad!
40 min
January 28, 2016
OpenAI and Gaussian Processes
In episode two of season two Ryan introduces us to Gaussian processes, we take a listener question on K-means. Plus, we talk with Ilya Sutskever, the director of research for OpenAI. (For more from Ilya, you can listen to our season one interview with him.)
35 min
January 14, 2016
Real Human Actions and Women in Machine Learning
In episode one of season two, we celebrate the 10th anniversary of Women in Machine Learning (WiML) with its co-founder (and our guest host for this episode) Hanna Wallach of Microsoft Research. Hanna and Jenn Wortman Vaughan, who also helped to found the event, tell us about how the 2015 event went. Lillian Lee (Cornell), Raia Hadsell (Google DeepMind), Been Kim (AI2/University of Washington), and Corinna Cortes (Google Research) gave invited talks at the 2015 event. WiML also released a directory of women in machine learning; if you’d like to be listed, want to find a collaborator, or are looking for an expert to take part in an event, it’s an excellent resource. Plus, we talk with Jenn Wortman Vaughan about the research she is doing at Microsoft Research, which examines the assumptions we make about how humans actually act and uses that to inform thinking about our interactions with computers. Want to learn more about the talks at WiML 2015? Here are the slides from each speaker: Lillian Lee, Corinna Cortes, Raia Hadsell, Been Kim.
59 min
November 22, 2015
Open Source Releases and The End of Season One
In episode twenty four we talk with Ben Vigoda about his work in probabilistic programming (everything from his thesis to his new company). Ryan talks about TensorFlow and Autograd for Torch, some open source tools that have been recently released. Plus we take a listener question about the biggest thing in machine learning this year. This is the last episode in season one. We want to thank all our wonderful listeners for supporting the show, asking us questions, and making season two possible! We’ll be back in early January with the beginning of season two!
40 min
November 5, 2015
Probabilistic Programming and Digital Humanities
In episode 23 we talk with David Mimno of Cornell University about his work in the digital humanities (and explore what machine learning can tell us about lady zombie ghosts and huge bodies of literature), Ryan introduces us to probabilistic programming, and we take a listener question about knowledge transfer between math and machine learning.
48 min
October 22, 2015
Workshops at NIPS and Crowdsourcing in Machine Learning
In episode twenty two we talk with Adam Kalai of Microsoft Research New England about his work using crowdsourcing in Machine Learning, the language made of shapes of words, and New England Machine Learning Day. We take a look at the workshops being presented at NIPS this year, and we take a listener question about changing the number of features your data has.
47 min
October 8, 2015
Machine Learning Mastery and Cancer Clusters
In episode twenty one  we talk with Quaid Morris of the University of Toronto, who is using machine learning to find a better way to treat cancers. Ryan introduces us to expectation maximization and we take a listener question about how to master machine learning.
26 min
September 24, 2015
Data from Video Games and The Master Algorithm
In episode 20 we chat with Pedro Domingos of the University of Washington; he's just published a book, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. We get some insight into Linear Dynamical Systems, which the Datta Lab at Harvard Medical School is doing some interesting work with. Plus, we take a listener question about using video games to generate labeled data (spoiler alert, it's an awesome idea!). We're in the final hours of our fundraising campaign and we need your help!
46 min
September 10, 2015
Strong AI and Autoencoders
In episode nineteen we chat with Hugo Larochelle about his work on unsupervised learning, the International Conference on Learning Representations (ICLR), and his teaching style. His YouTube courses are not to be missed, and his Twitter feed @Hugo_Larochelle is a great source for paper reviews. Ryan introduces us to autoencoders (for more, turn to the work of Richard Zemel), plus we tackle the question of what is standing in the way of strong AI. Talking Machines is beginning development of season two! We need your help! Donate now on Kickstarter.
36 min
August 27, 2015
Active Learning and Machine Learning in Neuroscience
In episode eighteen we talk with Sham Kakade, of Microsoft Research New England, about his expansive work which touches on everything from neuroscience to theoretical machine learning. Ryan introduces us to active learning (great tutorial here) and we take a question on evolutionary algorithms. Today we're announcing that season two of Talking Machines is moving into development, but we need your help! In order to raise funds, we've opened the show up to sponsorship and started a Kickstarter and we've got some great nerd cred prizes to thank you with. But more than just getting you a totally sweet mug your donation will fuel journalism about the reality of scientific research, something that is unfortunately hard to find. Lend a hand if you can!
53 min
August 13, 2015
Machine Learning in Biology and Getting into Grad School
In episode seventeen we talk with Jennifer Listgarten of Microsoft Research New England about her work using machine learning to answer questions in biology. Recently, with her collaborator Nicolo Fusi, she used machine learning to make CRISPR more efficient and to correct for latent population structure in GWAS studies. We take a question from a listener about the development of computational biology and Ryan gives us some great advice on how to get into grad school (Spoiler alert: apply to the lab, not the program.)
48 min
July 30, 2015
Machine Learning for Sports and Real Time Predictions
In episode sixteen we chat with Danny Tarlow of Microsoft Research Cambridge (in the UK, not MA). Danny (along with Chris Maddison and Tom Minka) won best paper at NIPS 2014 for his paper A* Sampling. We talk with him about his work applying machine learning to sports and politics. Plus we take a listener question on making real time predictions using machine learning, and we demystify backpropagation. You can use Torch, Theano or Autograd to explore backprop more.
29 min
July 16, 2015
Really Really Big Data and Machine Learning in Business
In episode fifteen we talk with Max Welling of the University of Amsterdam and the University of California, Irvine. We talk with him about his work with extremely large data, and about big business and machine learning. Max was program co-chair for NIPS in 2013 when Mark Zuckerberg visited the conference, an event which Max wrote very thoughtfully about. We also take a listener question about the relationship between machine learning and artificial intelligence. Plus, we get an introduction to change point detection. For more on change point detection check out the work of Paul Fearnhead of Lancaster University. Ryan also has a paper on the topic from way back when.
23 min
July 2, 2015
Solving Intelligence and Machine Learning Fundamentals
In episode fourteen we talk with Nando de Freitas. He’s a professor of Computer Science at the University of Oxford and a senior staff research scientist at Google DeepMind. Right now he’s focusing on solving intelligence. (No biggie.) Ryan introduces us to anchor words and how they can help us expand our ability to explore topic models. Plus, we take a question about the fundamentals of tackling a problem with machine learning.
30 min
June 18, 2015
Working With Data and Machine Learning in Advertising
In episode thirteen we talk with Claudia Perlich, Chief Scientist at Dstillery. We talk about her work using machine learning in digital advertising and her approach to data in competitions. We take a look at information leakage in competitions after ImageNet Challenge this year. The New York Times covered the events, and Neil Lawrence has been writing thoughtfully about it and its impact. Plus, we take a listener question about trends in data size.
39 min
June 4, 2015
The Economic Impact of Machine Learning and Using The Kernel Trick on Big Data
In episode twelve we talk with Andrew Ng, Chief Scientist at Baidu, about how speech recognition is going to explode the way we use mobile devices and his approach to working on the problem. We also discuss why we need to prepare for the economic impacts of machine learning. We’re introduced to Random Features for Large-Scale Kernel Machines, and talk about how using this twist on the Kernel trick can help you dig into big data. Plus, we take a listener question about the size of computing power in machine learning.
40 min
May 21, 2015
How We Think About Privacy and Finding Features in Black Boxes
In episode eleven we chat with Neil Lawrence from the University of Sheffield. We talk about the problems of privacy in the age of machine learning, the responsibilities that come with using ML tools and making data more open. We learn about the Markov decision process (and what happens when you use it in the real world and it becomes a partially observable Markov decision process) and take a listener question about finding insights into features in the black boxes of deep learning.
33 min
May 7, 2015
Interdisciplinary Data and Helping Humans Be Creative
In episode 10 we talk with David Blei of Columbia University. We talk about his work on latent Dirichlet allocation, topic models, the PhD program in data that he’s helping to create at Columbia, and why exploring data is inherently multidisciplinary. We learn about Markov Chain Monte Carlo and take a listener question about how machine learning can make humans more creative.
34 min
April 23, 2015
Starting Simple and Machine Learning in Meds
In episode nine we talk with George Dahl of the University of Toronto about his work on the Merck molecular activity challenge on Kaggle and on speech recognition. George recently successfully defended his thesis at the end of March 2015. (Congrats George!) We learn about how networks and graphs can help us understand latent properties of relationships, and we take a listener question about just how you find the right algorithm to solve a problem (Spoiler: start simple.)
38 min
April 9, 2015
Spinning Programming Plates and Creative Algorithms
On episode eight we talk with Charles Sutton, a professor in the School of Informatics at the University of Edinburgh, about computer programming and using machine learning to better understand how it’s done well. Ryan introduces us to collaborative filtering, a process that helps to make predictions about taste. Netflix and Amazon use it to recommend movies and items. It's the process that the Netflix Prize competition further helped to hone. Plus, we take a listener question on creativity in algorithms.
35 min
March 26, 2015
The Automatic Statistician and Electrified Meat
In episode seven of Talking Machines we talk with Zoubin Ghahramani, professor of Information Engineering in the Department of Engineering at the University of Cambridge. His project, The Automatic Statistician, aims to use machine learning to take raw data and give you statistical reports and natural language summaries of the trends that data shows. We get really hungry exploring Bayesian non-parametrics through the stories of the Chinese Restaurant Process and the Indian Buffet Process (but remember, there’s no free lunch). Plus we take a listener question about how much we should rely on ourselves and our ideas about what intelligence in electrified meat looks like when we try to build machine intelligences.
45 min
March 13, 2015
The Future of Machine Learning from the Inside Out
We hear the second part of our conversation with Geoffrey Hinton (Google and University of Toronto), Yoshua Bengio (University of Montreal) and Yann LeCun (Facebook and NYU). They talk with us about the history (and future) of research on neural nets. We explore how to use Determinantal Point Processes. Alex Kulesza and Ben Taskar (who passed away recently) have done some really exciting work in this area; for more on DPPs check out their paper on the topic. Also, we take a listener question about machine learning and function approximation (spoiler alert: it is, and then again, it isn’t).
28 min
February 26, 2015
The History of Machine Learning from the Inside Out
In episode five of Talking Machines, we hear the first part of our conversation with Geoffrey Hinton (Google and University of Toronto), Yoshua Bengio (University of Montreal) and Yann LeCun (Facebook and NYU). Ryan introduces us to the ideas in tensor factorization methods for learning latent variable models (which is both a tongue twister and one of the new tools in ML). To find out more on the topic, the paper Tensor decompositions for learning latent variable models is a good place to start. You can also take a look at the work of Daniel Hsu, Animashree Anandkumar and Sham M. Kakade. Plus we take a listener question about just where statistics stops and machine learning begins.
32 min
February 12, 2015
Using Models in the Wild and Women in Machine Learning
In episode four we talk with Hanna Wallach, of Microsoft Research. She's also a professor in the Department of Computer Science, University of Massachusetts Amherst and one of the founders of Women in Machine Learning (better known as WiML). We take a listener question about scalability and the size of data sets. And Ryan takes us through topic modeling using Latent Dirichlet allocation (say that five times fast).
45 min
January 29, 2015
Common Sense Problems and Learning about Machine Learning
On episode three of Talking Machines we sit down with Kevin Murphy who is currently a research scientist at Google. We talk with him about the work he’s doing there on the Knowledge Vault, his textbook, Machine Learning: A Probabilistic Perspective (and its arch nemesis which we won’t link to), and how to learn about machine learning (Metacademy is a great place to start). We tackle a listener question about the dream of a one step solution to strong Artificial Intelligence and if Deep Neural Networks might be it. Plus, Ryan introduces us to a new way of thinking about questions in machine learning from Yoshua Bengio’s lab at the University of Montreal, outlined in their new paper, Identifying and attacking the saddle point problem in high-dimensional non-convex optimization, and Katherine brings up Facebook’s release of open source machine learning tools and we talk about what it might mean. If you want to explore some open source tools for machine learning we also recommend giving these a try: the super big list of ML open source projects, Torch, Gaussian Process Machine Learning Toolbox, PyMC, Mallet, Stan, Weka, Theano, Caffe, and Spearmint.
40 min
January 15, 2015
Machine Learning and Magical Thinking
Today on Talking Machines we hear from Google researcher Ilya Sutskever about his work, how he became interested in machine learning, and why it takes a little bit of magical thinking. We take your questions, and explore where the line between human programming and computer learning actually is. And we sift through some news from the field: Ryan explains the concepts behind one of the best papers at NIPS this year, A* Sampling, and Katherine brings up an open letter about research priorities and ethical questions that was recently published.
35 min
January 1, 2015
Hello World!
In the first episode of Talking Machines we meet our hosts, Katherine Gorman (nerd, journalist) and Ryan Adams (nerd, Harvard computer science professor), and explore some of the interviews you'll be able to hear this season. Today we hear some short clips on big issues, we'll get technical, but today is all about introductions. We start with Kevin Murphy of Google talking about his textbook that has become a standard in the field. Then we turn to Hanna Wallach of Microsoft Research NYC and UMass Amherst and hear about the founding of WiML (Women in Machine Learning). Next we discuss academia's relationship with business with Max Welling from the University of Amsterdam, program co-chair of the 2013 NIPS conference (Neural Information Processing Systems). Finally, we sit down with three pillars of the field, Yann LeCun, Yoshua Bengio, and Geoff Hinton, to hear about where the field has been and where it might be headed.
41 min