Machine Learning Engineer Spotlight: Mani Sarkar
In our new blog series, we’re interviewing data scientists and machine learning engineers about their career paths, areas of interest and thoughts on the future of AI. We kick off this week with a 20-year veteran and jack-of-all-trades when it comes to machine learning and data science: Mani Sarkar. Mani is a strategic machine learning engineer based in London, UK, who believes in getting beyond the theoretical and applying AI to real-world problems.
Below is our interview, lightly edited for clarity.
Tell us more about how you got into machine learning.
I started my career as a software developer, writing desktop and web-based applications and command-line tools. The best thing about being a developer is learning new things. I’ve always been interested in data, numbers and math, and in the last few years I got more serious about it.
After 20 years working as a permanent employee, I decided to take charge of my career as a freelancer. I help companies develop proof-of-concept or minimum viable products to secure funding or go to market quickly. I focus on improving performance or the speed of the software development process. My motto is, “Strengthening teams and helping them accelerate!”
There are really no boundaries to what I’m asked to do, so learning data science became a major priority for me. One client inspired me to delve into the subject more: the company wrote bots that could read and write computer code, providing developers with recommendations on how to improve it. As someone who’s interested in software quality, that pushed me to pursue practical machine learning and data science projects.
What are you most excited about in the work you are doing these days?
I take a top-down approach to machine learning, focusing first on the business problem, whereas many others take a bottom-up approach. I can count on my fingers the organizations taking a top-down approach, and my philosophy of learning and implementing closely matches theirs. Creating programs, content, guides and everything else using this principle is very exciting. Machine learning is an ever-changing and hard-to-grasp field, so it is important to help business users contextualize technical projects.
Autonomy and creativity are a great part of what I do. When I’m free to be creative, I’m able to get the best results for the end user, the customer or even the community (when I’m working on open-source projects).
What is the coolest machine learning problem you have worked toward solving?
Natural language processing (NLP) is a broad field with many new innovations and advancements. Despite that, at a very basic level, there are no comprehensive tools to analyze tabular text data. There are a lot of fragmented tools and utilities available, but many of them are not open-sourced or widely shared. So we all end up building our own little solutions to analyze text datasets, and each of us might do it differently and get different results.
While preparing for a talk last month, I wrote a simple utility called NLP Profiler in under three hours, which is now going to be part of the Better NLP library. Given a dataset and the name of a column containing text, NLP Profiler will return either high-level insights about the text or low-level/granular statistical information about it. Think of it as calling pandas’ describe() function or running Pandas Profiling on your data frame, but for the text columns of a dataset rather than the purely columnar ones (tabular or spreadsheet-like data where each column may be a different type, such as string, numeric or date; this covers most data commonly stored in relational databases or in tab-separated or .csv files).
I have used it on a few datasets and it has surfaced some interesting information. High-level information includes things like sentiment analysis, subjectivity/objectivity analysis, and grammar or spelling quality checks. Low-level details include the number of words in a sentence, the number of emojis in the text, and so on. NLP Profiler can do this analysis with a single line of code. Above all, it can be extended and shared openly with others. This opens a new world of machine learning on text data and can help any NLP engineer or practitioner.
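To make the idea concrete, here is a minimal, stdlib-only sketch of the kind of low-level profiling described above (word, character and emoji counts per row of a text column). This is an illustration of the concept, not NLP Profiler’s actual API; the function name and output shape are assumptions for this example.

```python
import re

def profile_text_column(rows, column):
    """Return low-level stats (word, char and emoji counts) for each row's text."""
    # Rough emoji match: common emoji and symbol code-point ranges.
    emoji_pattern = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")
    stats = []
    for row in rows:
        text = row[column]
        stats.append({
            "words": len(text.split()),
            "chars": len(text),
            "emojis": len(emoji_pattern.findall(text)),
        })
    return stats

data = [{"review": "Great product! 😀"}, {"review": "Arrived late and broken"}]
report = profile_text_column(data, "review")
```

The real library wraps this kind of per-row analysis (plus higher-level checks such as sentiment) behind a single call on a data frame, much like describe() does for numeric columns.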
How do you predict machine learning will evolve over the next decade?
Automation will play a big role. Even though there will be big challenges – privacy, ethics, bias and more – there will be ways around them using a combination of automation and human intervention. Machine learning will evolve into a human-in-the-loop practice, with AI assisting humans in everything from collecting data, to training models, to analyzing those models.
I predict that there are three different pathways that may occur in parallel:
- Fully/partially AI-driven systems: Many of these error-prone systems have been attempted, and I imagine there will be even more failures to learn from in the future.
- AI assisting humans: Rather than AI taking people’s jobs, AI will augment mundane tasks. This will create a cascade of new industries, just like the advent of PCs in the 80s and 90s created countless fulfilling careers.
- Humans doing tasks AI cannot do fully or partially: AI still isn’t good at some tasks humans excel at, which will remain the case for some time.
Since NLP is an area in which I’m particularly interested, I imagine that in the future we may see smart dialogue systems with memory and the ability to detect context. We’ve already come a long way from ELIZA, the 1960s MIT NLP program. I created a conversational chatbot demo and video based on the logic used to build ELIZA, which people can play with on my GitHub.
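ELIZA’s core trick is simple enough to sketch in a few lines: ordered pattern rules with canned response templates. The rules below are hypothetical examples for illustration; the original program used a richer keyword-ranking script and pronoun reflection.

```python
import re

# Each rule pairs a pattern with a response template; rules are tried in order.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # default when no rule matches
```

A modern dialogue system with memory and context detection goes far beyond this, but the contrast shows how far the field has come since the 1960s.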
What is the most interesting application of machine learning you have seen out there?
I was recently at an online meetup where a small startup called PolyAI gave a demo of an interactive chatbot app that could take a phone call and make a reservation for any caller. It was used to demonstrate how a person could book a restaurant table via the app. It was amazing to see how the chatbot picked up many of the nuances in the caller’s language and style of speaking (even the accent) that previously only experienced professionals could have responded to. The accuracy with which the information was delivered to the user was impressive. Even though the caller wasn’t used to the fact that they were talking to a chatbot and not a human, they could get their points across and were pleasantly surprised.
Another demo I recently experienced was an open-source tool called PyCaret, which automates the whole process of creating a machine learning model, making it available with only a few lines of Python. The tool performed all kinds of analysis and generated metrics and logs that could be assessed to make further decisions. It even had a way to perform post-model-creation analysis using SHAP.
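The automation idea behind tools like PyCaret can be sketched in plain Python: fit several candidate models on the same data, score each, and keep the best. The two candidates below are deliberately trivial stand-ins invented for this example, not PyCaret’s actual estimators or API.

```python
def majority_class(train_X, train_y):
    """Baseline: always predict the most common training label."""
    label = max(set(train_y), key=train_y.count)
    return lambda x: label

def threshold_rule(train_X, train_y):
    """Hypothetical one-feature rule: predict 1 when the feature exceeds the mean."""
    mean = sum(train_X) / len(train_X)
    return lambda x: 1 if x > mean else 0

def compare_models(candidates, X, y):
    """Fit every candidate, score it by accuracy, and return the best (name, score)."""
    def accuracy(model):
        return sum(model(x) == t for x, t in zip(X, y)) / len(y)
    fitted = [(name, fit(X, y)) for name, fit in candidates]
    return max(((name, accuracy(model)) for name, model in fitted),
               key=lambda pair: pair[1])

X = [0.1, 0.2, 0.8, 0.9]
y = [0, 0, 1, 1]
best_name, best_score = compare_models(
    [("majority", majority_class), ("threshold", threshold_rule)], X, y)
```

Real AutoML tools add cross-validation, hyperparameter search and logging around this same compare-and-select loop, which is why a model can be produced in just a few lines.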
What do you see as the biggest challenges in machine learning and AI right now?
Ethics and privacy are big challenges, as we are all aware. We have not developed general AI yet (or if it exists, we don’t see it in the mainstream), at which point these concerns would become far more severe.
Specialization in different domains is still a challenge. There’s still a lot of work to be done to master practical applications of AI. Smaller subsets of different domain expertise need to come together to become generally useful to the population.
On the NLP front, machine understanding of various languages and dialects will remain an issue, though perhaps not for long. The accuracy of general NLP systems still needs work before we can accept them in the mainstream for domestic or industrial use. That said, many industries have succeeded in creating solutions for specific needs (under controlled environments).
The other challenge will be weaving these AI creations into society. With certain inventions and innovations, the challenge lies in the lack of knowledge about how these systems work. Specific areas of concern include algorithms for crime and law enforcement, or health and safety. Authenticity, and the rise of “deepfakes,” will continue to be a challenge for some time.
Finally, the energy consumption of data centers that specialize in AI/machine learning workloads is having negative consequences for the environment. That’s a global problem that needs to be solved.
If you could change one thing about the public perception of machine learning and AI, what would it be?
The biggest perception to change is the idea that AI is here to take away our jobs or make our lives difficult. When augmented with our skills and knowledge, it can assist us in our everyday life and work. Contrary to what the entertainment industry might have you believe, AI can’t “think” independently the way we do. We haven’t reached the stage where we can say we have built an electronic or digital consciousness. We may be able to build one in the distant future, but that’s a discussion for another time.
Mani is a passionate developer, mainly in the Java/JVM space, currently strengthening small teams and startups and helping them accelerate as a freelance software, data and ML engineer.
He is a Java Champion, JCP member, OpenJDK contributor, and thought leader in the LJC and other developer communities, and is involved with @adoptopenjdk, @graalvm and other F/OSS projects. He writes code not just on the Java/JVM platform but in other programming languages, and hence likes to call himself a polyglot developer. He sees himself working in the areas of core Java, the JVM, the JDK, HotSpot, Graal, GraalVM, Truffle, VMs and performance tuning.
An advocate of a number of agile and software craftsmanship practices and a regular at many talks, conferences (Devoxx, VoxxedDays) and hands-on workshops – he speaks at, participates in, organises and helps out at many of them. He often expresses his thoughts via blog posts (on his own blog, DZone, Medium and other third-party sites) and microblogs (tweets).
Learn more and connect with Mani here.