The Best Programming Languages for Data Science and Machine Learning

woman coding on computer

Newcomers to data science or artificial intelligence frequently ask me which programming language they should learn to build machine learning algorithms, so I wrote this article as a reference for anyone who wants an answer to that question. These are what I consider the four most important languages, ranked in terms of usefulness based on both overall popularity within the data science community and my own personal experience:

Best Programming Languages for Machine Learning:
#1 Choice: Python
#2 Choice: R
#3 Choice: Java
#4 Choice: C/C++

#1 Programming Language: Python

Python is the most popular language to use for machine learning and for three good reasons.

First, its package-based style allows you to utilize efficient machine learning and statistical packages that others have made, preventing you from having to constantly reinvent the wheel for common problems. Many, if not most, of the best packages (like NumPy, pandas, and scikit-learn) are in Python. This almost allows you to “cheat” when programming machine learning algorithms.
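
As a quick illustration (a minimal sketch of my own, assuming scikit-learn is installed; not an example from the original post), here is how much heavy lifting these packages do: a working classifier in a handful of lines, with no algorithm code written from scratch.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load a built-in example dataset and split it for training and testing
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "wheel" someone else already built: a full random forest classifier
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # accuracy on held-out test data
```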

Second, Python is a powerful and flexible all-purpose language, so if you are building a machine learning algorithm for some larger product or system, you can write the code for that surrounding system without having to switch languages or software. Python supports object-oriented, functional, and procedure-oriented programming styles, giving you the flexibility to code in whatever style, or combination of styles, you like best or that fits the specific context.
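
To make the multi-paradigm point concrete, here is a toy example of my own (not from the original post) computing the same result, the sum of the squares of the even numbers in a list, in two of these styles.

```python
nums = [1, 2, 3, 4, 5, 6]

# Procedural style: an explicit loop with a mutable accumulator
total = 0
for n in nums:
    if n % 2 == 0:
        total += n * n

# Functional style: a single composed expression with no mutable state
total_functional = sum(n * n for n in nums if n % 2 == 0)

assert total == total_functional == 56
```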

Third, unlike a language like Java or C++, Python does not require elaborate setup to run a single line of code. You can easily build out a full coding infrastructure if you need one, but if you only need to run a simple command or test, you can start immediately.
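
For instance (a trivial example of my own), the line below is a complete, runnable Python program: no class declarations, build steps, or boilerplate, and you can paste it straight into the interactive interpreter.

```python
print(sum(x * x for x in range(10)))  # a complete Python program in one line
```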

When I program in Python, I personally love using Jupyter Notebook, since its interface lets me both write code and easily present my code and findings as a report or document. Another data scientist can read and analyze my code and its output together. I personally wish more data scientists published their papers and reports in Jupyter Notebook or other notebooks like it for this reason.

If you have time to learn a single programming language for machine learning, I would strongly recommend it be Python. The next three languages, R, Java, and C++, do not match its ease and popularity within data science.

#2 Programming Language: R

R is a popular language among statisticians, tailored specifically for advanced statistical analysis. It includes many well-developed packages for machine learning but is not as popular with data scientists as Python. For example, in Towards Data Science’s survey, 57% of data scientists reported using Python, with 33% prioritizing it, while only 31% reported using R, with 17% prioritizing it. This suggests that R is a complementary, not primary, language for data science and machine learning. Most R packages have their equivalent in Python (and, to some extent, the other way around). Unlike Python, an all-purpose language able to do wonders beyond analyzing data and developing machine learning algorithms, R is tailored to statistics and data analysis and cannot do much beyond that. That said, R programmers are developing more and more packages for it, steadily expanding what it can do.

source codes screenshot

#3 Programming Language: Java

Java was once the most popular language around, but Python has dethroned it in the last few years. As an avid Java programmer who programs in Java for fun, it breaks my heart to put it so far down the list, but Python is clearly a better language for data science and machine learning. If you are working in an organization or other context that still uses Java for part or all of its software infrastructure, then you may be stuck using it, but most recent developments, particularly in machine learning, have occurred in Python and R (and a few other languages). Thus, if you use Java, you will frequently find yourself unnecessarily reinventing the wheel.

Plus, one major con of Java is that quick, on-the-go analysis is not possible, since you must write a whole scaffolding of code before you can run a single line. Java can still be popular in certain contexts where the surrounding applications/software that utilize the machine learning algorithms are in Java, which is common in finance, front-end development, and companies with longstanding Java-based software.

#4 Programming Language: C/C++

The same Towards Data Science survey I mentioned above lists C/C++ as the second most popular data science and machine learning language after Python. Java follows closely behind, yet I ranked Java third and not C/C++ because I personally find Java a better overall language than C or C++. In C or C++, you may frequently find yourself reinventing the wheel – having to develop machine learning algorithms that others have already built in Python – but in some backend systems that have been built in C or C++, as in engineering and electronics, you do not have much of an option. C++ also shares Java’s problem of lacking the ability to do quick, on-the-go coding without having to build a whole infrastructure first.

Conclusion

For a beginner to the data science scene, learning a single programming language is the most helpful way to enter the field. Use learning a programming language to assess whether data science is for you: if you struggle with and do not like programming, then developing machine learning algorithms for a living is probably not a good fit for you.

Many groups are trying to develop software that enables machine learning without programming: DataRobot, Auto-WEKA, RapidMiner, BigML, and AutoML, among many others. The pros and cons and successes and failures of these tools warrant a separate blog post (one I intend to write eventually). As of now, though, they have not replaced programming languages, either in the practical ability to develop complex machine learning algorithms or as a way to demonstrate that you have the technical computational/programming skills the field requires.

Depending on where you work and the type of field or tasks you are working in, you might end up using the language(s) or software your team works with so that you can easily collaborate on projects. Some areas of work or types of tasks also prefer certain packages and languages. But if you demonstrate that you already know a complex programming language like Python (or Java or C++), even if it is not your would-be team’s preferred language, you will likely convince any hiring manager that you can learn their specific language or software.

Photo credit #1: ThisIsEngineering at https://www.pexels.com/photo/woman-coding-on-computer-3861958/

Photo credit #2: Hitesh Choudhary at https://unsplash.com/photos/D9Zow2REm8U

Photo credit #3: thekirbster at https://www.flickr.com/photos/kirbyurner/30491542972/in/photolist-MQRUEh-2g3E1wf-Nsr8q9-HDKJxu-22VkHJU-2bWRXY2/lightbox/ (Yes, even though it is cool looking, this is not my code.)

Photo credit #4: Steinar Engeland at https://unsplash.com/photos/WDf1tEzQ_SY

Photo credit #5: Markus Spiske at https://unsplash.com/photos/jUWw_NEXjDw

In a Helicopter Overlooking the Wildfire: A Data Science Perspective in A Frontline Hospital During the Covid-19 Pandemic

I worked as a data scientist at a hospital in New York City during the worst of the COVID-19 pandemic. Over the spring and summer, we became overwhelmed as the city became, and then ceased to be, the global hotspot for COVID-19. I have been processing everything that happened since.

The pandemic overwhelmed the entire hospital, particularly my physician colleagues. When I met with them, I could often notice the combined effects of physical and emotional exhaustion in their eyes and voices. Many had just arrived from the ICU, where they had spent several hours fighting to keep their patients alive only to witness many of them die in front of them, and I could sense the emotional toll that was taking.

My experiences of the pandemic as a data scientist differed considerably yet were also exhausting and disturbing in their own way. I spent several months, day in and day out, researching how many of our patients were dying from the pandemic and why: trying to determine what factors contributed to their deaths and what we as a hospital could do to best keep people alive. The patient who had died the night before in front of the doctor I was meeting with became, for me, a single row in an already far-too-large data table of COVID-19 fatalities.

I felt like a helicopter pilot overlooking an out-of-control wildfire.[1] In such wildfires, teams of firefighters (the doctors, in this analogy) position themselves at strategic locations on the ground to push back the fire as best they can. They experience the flames and carnage up close. My placement in the helicopter, on the other hand, removed me from ground zero, forcing me to see and analyze the fire in its entirety: its sweeping, massive destruction across the whole forest. That vantage point was strategically useful for determining the best ways to fight the fire, and it shielded me from the immediate destruction. Nevertheless, witnessing the vastness of the carnage from the air had its own challenges, stress, and emotional toll.

Being an anthropologist by training, I am accustomed to being “on the ground.” Anthropology is predicated on the idea that to understand a culture or phenomenon, one must understand the everyday experiences of those on the ground amid it, and my anthropological training has instilled an instinct to go straight to those in the thick of it and talk with them.

Yet this experience has taught me that that perception is overly simplistic: the so-called “ground” has many layers to it, especially for a complex phenomenon like a pandemic. Being in the helicopter is just as much a way of being in the thick of it as standing before the flames.

Many in the United States have made considerable and commendable efforts to support frontline health workers. Yet, as the pandemic progresses and its societal effects grow in complexity in the coming months, I think we need to broaden our understanding of where the “frontlines” are and who counts as a “frontline worker” worthy of our support.

On the actual battlefields from which the “frontline” metaphor comes, militaries set up layered teams to support the logistical needs of ground soldiers, and those support teams must also frequently put themselves in harm’s way in the process. The frontline of this pandemic seems no different.

I think we need to expand our conceptions of what it means to be on the frontlines accordingly. Like anthropology, modern journalism, a key source of pandemic information for many of us, can fall into the trap of overfocusing on the “worst of the worst,” potentially ignoring the broader picture and the diversity of “frontline” experiences. For example, interviewing the busiest medical caregivers in the worst-affected hospitals in the most-affected places in the world likely promotes viewership, but telling only those stories ignores the experiences and sacrifices of the thousands of others necessary to keep them going.

To be clear, in this blog I am not seeking acknowledgement of my own work, nor do I think we should ignore the contributions of these medical professional “ground troops” in any way. Rather, in the spirit of “yes, and,” we should extend our understanding of “frontline workers” to acknowledge and celebrate the contributions of many other essential professionals during this crisis: transportation services, food distribution, postal workers, and more. I related my own experiences as a data scientist because they helped me learn this, not out of any desire for recognition.

This might help us appreciate the complexity of this crisis and its social effects, and the various kinds of sacrifices people have been making to address it. As it becomes increasingly clear that this pandemic is not going away anytime soon, appreciating the full extent of both could help us come together to buckle down and fight it.


[1] This video helped me understand the logistics of fighting wildfires, a fascinating topic in itself: https://www.youtube.com/watch?v=EodxubsO8EI. Feel free to check it out to understand my analogy in more depth.

Photo Credit #1: ReinhardThrainer at https://pixabay.com/photos/fire-forest-helicopter-forest-fire-5457829/

Photo Credit #2: Pixabay at https://www.pexels.com/photo/backlit-breathing-apparatus-danger-dangerous-279979/

Photo Credit #3: Pixabay at https://www.pexels.com/photo/scenic-view-of-rice-paddy-247599/

How to Analyze Texts with Data Science

flat lay photography of an open book beside coffee mug

A friend and fellow professor, Dr. Eve Pinkser, asked me to give a guest lecture on quantitative text analysis techniques within data science for her Public Health Policy Research Methods class at the University of Illinois at Chicago on April 13th, 2020. Multiple people have asked me similar questions about how to use data science to analyze texts quantitatively, so I figured I would post my presentation for anyone interested in learning more.

It provides a basic introduction to the different approaches so that you can determine which to explore in more detail. I have found that many people who are new to data science feel paralyzed when trying to navigate the vast array of data science techniques out there, unsure where to start.

Many of her students needed to conduct quantitative textual analysis as part of their doctoral work but struggled to determine what type of quantitative research to employ. She asked me to come in and explain the various data science and machine learning-based textual analysis techniques, since this was outside her area of expertise. The goal of the presentation was to help the PhD students in the class think through the types of data science quantitative text analysis techniques that would be helpful for their doctoral research projects.

Hopefully, it will likewise help you determine the type or types of text analysis you might need so that you can then look those up in more detail. Textual analysis, as well as the wider field of natural language processing of which it is a part, is a fast-growing subfield of data science doing important and groundbreaking work.

Photo credit: fotografierende at https://www.pexels.com/photo/flat-lay-photography-of-an-open-book-beside-coffee-mug-3278768/

The Five Most Common Data Science Interview Questions and How to Prepare for Them

Interviewing for a data science role can be a daunting task, especially for those new to the field. I have lost count of the number of data science interviews I have had over the years, but here are the five most common questions I have encountered and strategies for preparing for each. Prepping for these questions is a great opportunity to develop your story thesis, the most important part of any data science interview.

Most Common Data Science Questions:
1) Tell me about yourself.
2) Describe a data science project you have worked on.
3) What kind of experience do you have with messy data?
4) What programming languages and software have you used?
5) What are you looking for in a job?

Question 1: Tell me about yourself.

This is probably, hands down, the most common interview question across all industries and fields, not just data science, so the fact that it is also the most commonly asked question in data science interviews may not seem that surprising. A good answer is crucial for establishing a favorable first impression and laying out the main story, or thesis, of who you are, which you will come back to throughout the interview.

In data science interviews, I emphasize my passion for using data science tools to help organizations solve complex problems that were previously vexing. If you are unsure what your thesis is, I designed this activity to help people decipher it. Here is an example of how I would describe myself:

“I fell in love with data science because I enjoy helping organizations solve complex problems. In my past roles, I have used my combined data science and social science skills to explore and build solutions for complicated problems for which the typical ways of doing things within the organization have not worked. I am energized by the intellectual stimulation of breaking down complex problems and using data science to develop innovative yet useful solutions. What kinds of problems do you have that led you to look for a data scientist like me?”

Your self-description should tell the story of who you are in a way that demonstrates how you would be a natural fit for the role and helpful to the organization. This is your interview thesis: if you lay it out well, then every other question you answer will simply involve fleshing out one (or a combination) of its three basic parts: 1) who you are, 2) how your identity makes you a natural fit for the role, and 3) how this would benefit the organization.

Here are four important observations to note about how I told my story:

  1. I emphasized who I was – an innovator developing unique solutions to complex problems – while showing how my innovator identity naturally connects with data science and could be helpful for the organization. You might not consider yourself an “innovator” per se, but the trick is to figure out who you are based on what energizes and impassions you and then show how performing the data science role you are applying for is a natural fit for who you are.
  2. I told the story with normal words, not technical jargon. I have found that many, if not most, of my interviews, especially first-round interviews, are with employees without technical expertise, and since you often do not know the level of technical expertise of the interviewer, it is better to err on the non-technical side.
  3. I kept my story positive, only mentioning what I like to do. Sometimes people instinctively try to illustrate what they want by describing things they do not like to do: e.g., “At my previous job, I learned I do not like doing Y, so I am seeking to do X instead” or “I am doing Y, and I hate it. I want out.” I would describe these aspects of my story later if the interviewer asks, but I would stick with the positive at first, only mentioning what I want to do.
  4. I used strong, subjective, even emotional phrases like “fell in love with,” “passionate about,” and “energized by.” At first glance, these phrases might seem overly informal, but I have found they help interviewers remember me. Do not overdo it, but being more vivid and personable generally helps rather than hurts your interview chances for data science positions.

Question 2: Describe a data science project you have worked on.

This is the second most common question I have encountered, so make sure you come prepared with an exemplar project to showcase. They may ask you a lot of questions about your project, so I would recommend choosing a project that you did an amazing job on, really knocked out of the park, and are proud of. Unless there are disclosure issues, post your work on GitHub, a blog, LinkedIn, or somewhere else online, and include a link to it in your job application.

How to explain the project will vary considerably depending on your interviewer’s degree of expertise. I generally start with a non-technical, high level explanation and provide the technical details if the interviewer(s) prompts me to with follow-up questions. This gives the interviewers the freedom to choose the level of technical expertise they would like in their follow-up. A data scientist interviewer worth his or her salt will quickly steer the conversation into more technical aspects of your project that he or she wants to learn more about, but even then, starting non-technical demonstrates that you know how to effectively communicate your work to non-technical audiences as well.

When describing your project, you are effectively telling the story of the project, and most project stories have the following core components:

  • Who: You are probably the story’s protagonist (it is your interview after all, so naturally pick a project or part of a project where you were the primary driver), but there are likely multiple important side characters that you will need to set up, like who commissioned the project, who it was for, who the data was about, and so on.
  • What: The problem, need, or question your project sought to address generally forms the “conflict” of the project story, so be sure to explain what led to the problem, need, or question (in stories, called the inciting incident).
  • When and Where: The timeframe and setting/context in which the project took place (e.g., the organization you were working with or the class for which you did the project). How long you had to complete the project can also be important to establish.
  • How: What you did to solve the problem. If you tried a lot of approaches before discovering what worked, the how includes both your methodological story and your final solution (this is part of the rising and falling action of how you carried the project through). This is the meat of your story. You will want technical and non-technical descriptions of the how:
    • Technical How: Generally, the two core parts of a technical description are the model you used (and any others you tried, if applicable) and how you determined the features/variables you selected. Another important part might be how you cleaned and/or gathered the data.
    • Non-technical How: I have found that non-technical audiences usually do not glean much from either the model I ended up using or my feature selection procedure. Instead, I explain what type of functionality I ensured the model had to solve the problem I had just set up: for example, “I built a model that calculated the probability of X phenomenon based on data sources A, B, and C, testing various types of models to determine which would do this best, and then discerned which variables among those datasets were the best to use.” For a non-technical audience, that is generally enough. The core components for them are what goes into the model (the data), what result the model produces from it, and how that informed the problem, need, or question driving the project.

Finally, in your how explanation, make sure you slip in whatever programming languages and software you used: Python, R, SQL, Azure, etc.

  • Why: This is your explanation of why you chose the approach(es) you did for your how. Just as with the how, you will need technical and non-technical explanations of the why.

Make sure your non-technical explanation of the why aligns with your non-technical how. I commonly see data scientists go over a non-technical individual’s head by trying to provide a technical why explanation for their non-technical how. In particular, I would not explain the metric or criteria used to compare models or decide the feature selection procedure in my non-technical explanation, since these will likely lose a non-technical person. If my non-technical how description focused on what data the model used and what it did with it, then my non-technical why focuses on why building a model to do that mattered and how it helped others and/or myself in the real world.

  • What happened: This is the result of the project. Did you succeed or fail (or land somewhere in between)? Was it useful for whoever you built it for? Were you able to conduct any follow-up analysis after deployment? Maybe most importantly, what did you learn from the experience? In narrative terminology, this is the resolution. The more quantitatively you can measure any outcomes, the better.

These are the basic components of a project story. Here is the most common project I use, and when reading through it, feel free to analyze how I present each component of the story. I wrote this blog for a general audience, so I provided my non-technical how and why.

Question 3: What kind of experience do you have with messy data?

Interviewers ask me this question surprisingly frequently. They usually preface it by explaining that their organization has a lot of messy data that would require cleaning/processing from their future data scientist. This is a great opportunity to showcase your comfort with data science and its everyday issues.

I typically answer with something like this:

“Yes, I have had to organize and clean messy data all the time. That’s par for the course in data science: the running joke among data scientists is that 90% of any data science project is data cleaning, and 10% actually doing anything with it. At least you guys are honest about the fact that your data is messy. When I worked as a consultant, for example, I talked with many organizations about potential data science projects, and if they said their data was clean and ready to go, chances are they were lying either to themselves or to me about how messy and haphazard their data really was. The fact that you are upfront about the messiness of your data tells me that you guys as an organization are realistically assessing where you are and what you need.”

This answer not only establishes that I have handled messy data before but also normalizes the problem as something in the field that an expert (like myself) can resolve, and it compliments them for being up front. Answering this question confidently and positively has put me at the top of the list as the front-runner candidate in some interviews. Giving a good answer to it is a perfect opportunity to endear yourself to your interviewer.

Question 4: What programming languages and/or software have you used?

Even though a technical interviewer might ask this as well, I have encountered this question most frequently from non-technical interviewers. In my experience, fellow data scientist interviewers have more insider ways of deciphering whether you do in fact know data science, but for non-technical interviewers, this question is their initial way to probe that. Sometimes they will cling to a laundry list of software and/or languages to determine whether you are qualified.

Now, I believe that having experience with the exact combination of software tools that the data science team you would be joining uses is generally not that important a criterion for job success. For a good data scientist who already knows several software systems or programming languages, learning another is not that difficult a task. But their question is completely natural and reasonable coming from their side, so you will have to answer it.

If they ask open-endedly what software and languages you have used, list the ones you know, perhaps starting with those you use most often. I generally start by mentioning Python, since it is not only my favorite language for data science (see this article) but also conveys that I am familiar with programming in general.

More often, though, they might ask whether you have used X software before, often going down a list they have in front of them. I would never recommend lying by claiming experience with software you have never used, but I would recommend recasting a “No” by naming an equivalent tool you have worked with. Here is an example:

“No, I have not used Julia, but that is because I prefer using Python for what others might use Julia for. Python is an equivalent programming language of comparable power and complexity, and the data science teams I have worked on happened to prefer it over Julia.”

This not only casts the “No” in a more positive light but also shows that you are familiar with the tool they just mentioned and confident you could pick it up to match your would-be team.

Question 5: What are you looking for in a job?

Most often, this is the last major question interviewers ask me, but I have gotten it at the beginning as well. They probably save it until the end because it transitions easily into the next part of the interview: either their describing the role or your asking any questions you have.

If you did a good job laying out your thesis story in the first question, then here you simply restate it from a different angle. You already laid the groundwork, and you are just bringing it home at this point. If they ask me this at the beginning of the interview, before the “Tell me about yourself” question, then I use it to tell my thesis story from this new angle instead.

Here is my typical answer:

“Like I said, I am energized by figuring out how to help organizations solve complex data science problems. Over the years, I have found two concrete things in an organization that help me with this. First, I thrive in stimulating work environments where I am given the space and resources to think creatively through problems. Second, I need to be able to work with people from a variety of backgrounds and disciplines, from whom I can learn and with whom I can develop innovative approaches to the problem at hand. You guys seem to provide both. [I then conclude by explaining why they seem to provide both based on what I learned about the organization during the interview or, if we have not yet had a chance to talk about this, by asking about these qualities within the organization.]”

Notice that the first sentence references my answer to the “Tell me about yourself” question. If they ask this question before I have given that spiel, I spend 30 seconds to a minute providing a condensed self-introduction and then continue with the rest of the answer.

Conclusion

These are the five most common data science interview questions I have encountered and how to prepare for them. I have found that when data scientists give advice on how to prepare for job interviews, they often focus on preparing for highly technical, factual questions (e.g., here and here). Even though having a solid data science foundation can be important, refining your overall story thesis – who you are, what you are passionate about doing, and how that relates to this job – is far more important for advancing through the interview process.

I have found that humans, even supposedly “nerdy” data scientists, tend to connect with people and stories, so if you can hook them there, they generally remember you better and are more likely to hire you. When you have a compelling story, every other question will naturally fall into place as an intuitive further clarification of that overall story.  

Photo Credit #1: Work With Island at https://unsplash.com/photos/FX2QA0TMEYg

Photo Credit #2: Free-Photos at https://pixabay.com/photos/glasses-reading-glasses-spectacles-1246611/

Photo Credit #3: geralt at https://pixabay.com/illustrations/questions-font-who-what-how-why-2245264/

Photo Credit #4: Darwin Vegher at https://unsplash.com/photos/W_ZYCEUapF0

Photo Credit #5: geralt at https://pixabay.com/illustrations/software-program-cd-dvd-disc-pack-417880/

Photo Credit #6: jenoliver777 at https://pixabay.com/photos/horses-dogs-groundwork-blaze-2888749/

Data Science and the Myth of the “Math Person”

woman holding books

“Data science is doable,” a fellow attendee of EPIC’s 2018 conference in Honolulu would exclaim like a mantra. The conference was for business ethnographers and UX researchers interested in understanding and integrating data science and machine learning into their research. She was specifically addressing a tendency she had noticed, and which I have seen as well: qualitative researchers and other so-called “non-math people” frequently believe that data science is far too technical for them. This belief seems ultimately rooted in cultural myths about math and math-related fields like computer science, engineering, and now data science. In a similar vein to her statement, my goal in this essay is to discuss these attitudes and show that data science, like math, is relatable and doable if you treat it as such.

The “Math Person”

In the United States, many possess an implicit image of a “math person”: someone supposedly gifted at mathematics by nature. And many who do not see themselves fitting that image simply declare that math is not for them. The idea that some people are inherently able or unable to do math is false, however, and it prevents people from trying to become good at the discipline, even if they might enjoy and/or excel at it.

Most skills in life, including mathematical skills, are like muscles: you do not innately possess or lack that skill, but rather your skill develops as you practice and refine that activity. Anybody can develop a skill if they practice it enough.  

Scholars in anthropology, sociology, psychology, and education have documented how math is implicitly and explicitly portrayed as something some people can do and some cannot, especially in grade school math classes. Starting in early childhood, we implicitly, and sometimes explicitly, learn the idea that some people are naturally gifted at math while for others math is simply not their thing. Some internalize that they are gifted at math and thus take the time to practice enough to develop and refine their mathematical skills, while others internalize that they cannot do math, and their mathematical abilities stagnate. But the premise is simply not true.

Anyone can learn and do math if he or she practices math and cultivates mathematical thinking. If you do not cultivate your math muscle, then it becomes underdeveloped, and then, yes, math becomes harder to do. Thus, in a cruel irony, internalizing that one cannot do math can turn into a self-fulfilling prophecy: one gives up on developing mathematical skills, which leads to their further underdevelopment.

Similarly, we cultivate another false myth: that people skilled in mathematics (or math-related fields like computer science, engineering, and data science) generally do not possess strong social and interpersonal communication skills. The root of this stereotype lies more in how we think about mathematical and logical thinking than in the actual characteristics of mathematicians, computer scientists, or engineers. Social scientists who have studied the social skills of mathematicians, computer scientists, and engineers have found no discernible difference between their social and interpersonal communication skills and those of the rest of the world.

Quantitative and Qualitative Specialties

Anyone can learn and do math if he or she practices math and cultivates mathematical thinking.

The belief that some people are just inherently good at math, and that such people do not possess strong social and interpersonal communication skills, contributes to the division between quantitative and qualitative social research in both academic and professional contexts. These attitudes help cultivate the false idea that quantitative research and qualitative research are distinct skill sets for different types of people: that quantitative research can only be done by “math people” and qualitative research only by “people people.” They suddenly become separate specialties, even though social research by its very nature involves both. Such a split unnecessarily stifles authentic and holistic understanding of people and society.

In professional and business research contexts, qualitative and quantitative researchers should work with each other and, through that process, slowly learn each other’s skills. Done well, this would incentivize researchers to cultivate both mathematical/quantitative and interpersonal/qualitative research skills.

It would reward professional researchers who develop both skillsets and leverage them in their research, instead of encouraging researchers to specialize in one or the other. It could also encourage universities to require in-depth training in both when preparing students for future work, instead of requiring that students choose among disciplines that promote one track over the other.

Working together is only the first step, however, and its success hinges on whether it ultimately leads to the integration of these supposedly separate skillsets. Frequently, when qualitative and quantitative research teams work together, they still work mostly independently – qualitative researchers on the qualitative aspects of the project and quantitative researchers on the quantitative aspects – thus reinforcing the supposed distinction between them. Instead, such collaboration should involve qualitative researchers developing quantitative research skills by practicing those methods, and quantitative researchers likewise developing qualitative skills.

Conclusion

Anyone can develop mathematics and data science skills if they practice them. The same goes for the interpersonal skills necessary for ethnographic and other qualitative research. Depicting them as separate specialties – even if specialists come together to do their separate parts of a single research project – stifles their integration into a single toolkit for an individual and reinforces the false myths we have been teaching ourselves: that data science is for math, programming, or engineering people and that ethnography is for “people people.” This separation stifles holistic and authentic social research, which inevitably involves both qualitative and quantitative approaches.

Photo credit #1: Andrea Piacquadio at https://www.pexels.com/photo/woman-holding-books-3768126/

Photo credit #2: Antoine Dautry at https://unsplash.com/photos/_zsL306fDck

Photo credit #3: Mike Lawrence at https://www.flickr.com/photos/157270154@N05/28172146158/ and http://www.creditdebitpro.com/

Photo credit #4: Ryan Jacobson at https://unsplash.com/photos/rOYhgmDIOg8

When Is Machine Learning Useful?

In a past blog post, I defined and described what machine learning is. I briefly highlighted four instances where machine learning algorithms are useful. This is what I wrote:

  1. Autonomy: To teach computers to do a task without the direct aid/intervention of humans (e.g. autonomous vehicles)
  2. Fluctuation: Help machines adjust when the requirements or data change over time
  3. Intuitive Processing: Conduct (or assist in) tasks humans do naturally but are unable to explain how computationally/algorithmically (e.g. image recognition)
  4. Big Data: Breaking down data that is too large to handle otherwise

The goal of this blog post is to explain each in more detail.

Case #1: Autonomy

Car, Automobile, 3D, Self-Driving

The first major use of machine learning centers on teaching computers to do a task or tasks without the direct aid or intervention of humans. Self-driving vehicles are a high-profile example: teaching a vehicle to drive (scanning the road and determining how to respond to what is around it) with no or only minimal direct oversight from a human driver.

There are two basic types of tasks that machine learning systems might perform autonomously:

  1. Tasks humans frequently perform
  2. Tasks humans are unable to perform.

Self-driving cars exemplify the former: humans drive cars, and self-driving cars would perform all or part of the driving process. Another example would be chatbots and virtual assistants like Alexa, Cortana, and Ok Google, which seek to converse with users independently. Such systems might take over the human activity completely or only partially: for example, some customer service chatbots are designed to determine the customer’s issue but then transfer to a human when the issue reaches a certain complexity.

Humans have also sought to build autonomous machine learning algorithms to perform tasks that humans are unable to perform. Unlike self-driving cars, which conduct an activity many people do, people might also design a self-driving rover or submarine to drive and operate in a world that humans have so far been unable to inhabit, like other planets in our Solar System or the deep ocean. Search engines are another example: Google uses machine learning to help refine search results, which involves analyzing a massive amount of web data beyond what a human could normally do.

Case #2: Fluctuating Data

Business, Success, Curve, Hand, Draw, Present, Trend

Machine learning is also a powerful tool for making sense of and incorporating fluctuating data. Unlike other types of models with fixed processes for how they predict values, machine learning models can learn from current patterns and adjust, whether the patterns fluctuate over time or new use cases arise. This can be especially helpful when trying to forecast the future, allowing the model to decipher new trends if and when they emerge. For example, when predicting stock prices, machine learning algorithms can learn from new data and pick up changing trends to make the model better at predicting the future.

Of course, humans are notorious for changing over time, so this adaptability is especially helpful in models that seek to understand human preferences and behavior. For example, user recommendations – like Netflix’s, Hulu’s, or YouTube’s video recommendation systems – adjust based on usage over time, enabling them to respond to individual and/or collective changes in interests. The sketch below illustrates this kind of incremental updating.
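
Here is a minimal sketch of my own (assuming scikit-learn; not an example from the original post) of a model that adapts to fluctuating data. SGDRegressor supports incremental updates via partial_fit, so the model keeps learning as new data arrives rather than being retrained from scratch.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
model = SGDRegressor()

# Initial trend: y is roughly 2x
X_old = rng.uniform(0, 10, size=(200, 1))
y_old = 2 * X_old.ravel() + rng.normal(0, 0.5, 200)
model.partial_fit(X_old, y_old)

# The trend later shifts to roughly 3x; the model updates in place
# as each new batch of data arrives
for _ in range(50):
    X_new = rng.uniform(0, 10, size=(20, 1))
    y_new = 3 * X_new.ravel() + rng.normal(0, 0.5, 20)
    model.partial_fit(X_new, y_new)

print(model.coef_)  # the learned slope drifts from ~2 toward ~3
```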

Case #3: Intuitive Processing

Flat, Recognition, Facial, Face, Woman, System

Data scientists frequently develop machine learning algorithms to teach computers processes that humans do naturally but cannot fully explain computationally. For example, popular applications of machine learning center on replicating some aspect of sensory perception: image recognition, sound or speech recognition, and so on. These replicate the process of taking in sensory information (e.g. sight and sound) and processing, classifying, and otherwise making sense of it. Language processing, as in chatbots, is another example. In these contexts, machine learning algorithms learn a process that humans do intuitively (seeing or hearing stimuli and understanding language) but are unable to fully explain how or why.
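
A small sketch of my own (assuming scikit-learn; not from the original post) illustrates the idea: instead of hand-writing rules for recognizing digits, we hand the algorithm labeled images and let it learn the mapping itself.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()  # 8x8 grayscale images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0
)

clf = SVC()  # no explicit rules about strokes or shapes anywhere
clf.fit(X_train, y_train)
print(f"Test accuracy: {clf.score(X_test, y_test):.2f}")
```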

Many early forms of machine learning arose out of neurological models of how human brains work. The initial intention of neural nets, for instance, was to model our neurological decision-making processes. Much contemporary neurological scholarship has since disputed the accuracy of neural nets as representations of how our brains and minds work.[i] But whether or not they represent how human minds work, neural networks have provided a powerful technique for computers to process and classify information and make decisions. Likewise, many machine learning algorithms replicate some activity humans do naturally, even if the way they conduct that human task has little to do with how humans would.

Case #4: Big Data

Technology, 5G, Aerial, Abstract Background

Machine learning is a powerful tool when analyzing data that is too large to break down through conventional computational techniques. Recent computer technologies have vastly expanded data collection, storage, and processing, a major driver of big data. Machine learning has arisen as a major, if not the major, means of analyzing this big data.

Machine learning algorithms can manage a dizzying array of variables and use them to find insightful patterns (like lasso regression for linear modeling). Many big data cases involve hundreds, thousands, or even tens or hundreds of thousands of input variables, and many machine learning techniques (like best subsets selection, stepwise selection, and lasso regression) can process the myriad variables in big data and determine the best ones to use.
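
As an illustration, here is a minimal sketch of my own (assuming scikit-learn; not from the original post) of lasso regression pruning uninformative variables: given many candidate features, the L1 penalty drives most coefficients exactly to zero, leaving a small set of useful ones.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

# 1,000 rows and 100 candidate features, only 5 of which actually matter
X, y = make_regression(n_samples=1000, n_features=100, n_informative=5,
                       noise=1.0, random_state=0)

lasso = Lasso(alpha=1.0).fit(X, y)
kept = np.flatnonzero(lasso.coef_)  # indices of non-zero coefficients
print(f"Features kept: {len(kept)} of {X.shape[1]}")  # typically close to 5
```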

Recent developments in computing provide the incredible processing power necessary to do such work (and, debatably, machine learning is currently helping to push computational power forward by creating demand for greater computational abilities). Hand calculations and the computers of several decades ago were often unable to handle the calculations necessary to analyze large amounts of information: demonstrated, for example, by the fact that computer scientists invented the now-popular neural networks many decades ago, but they did not gain popularity as a method until recent computer processing made them easy and worthwhile to run.

Tractors and other large-scale agricultural technologies coincided historically with the enlargement of farm property sizes: such machinery not only allowed farmers to manage large tracts of land but also economically incentivized larger farms. Likewise, machine learning algorithms provide the main technological means to analyze big data, both enabling the rise of big data in the professional world and being incentivized by it.

Conclusion

Here I have described four major uses of machine learning algorithms. Machine learning has become popular in many industries because of at least one of these functionalities, but of course, they are not its only potential uses. In addition, as we develop machine learning tools, we constantly invent more. Given machine learning’s newness compared to many century-old technologies, time will tell all the ways humans end up utilizing it.

Photo credit #1: Mike MacKenzie at https://www.flickr.com/photos/mikemacmarketing/30212411048/

Photo credit #2: julientromeur at https://pixabay.com/illustrations/car-automobile-3d-self-driving-4343635/

Photo credit #3: geralt at https://pixabay.com/illustrations/business-success-curve-hand-draw-1989130/

Photo credit #4: geralt at https://pixabay.com/illustrations/flat-recognition-facial-face-woman-3252983/

Photo credit #5: mohamed_hassan at https://pixabay.com/illustrations/technology-5g-aerial-4816658/


[i] See Richard, Nagyfi. The differences between Artificial and Biological Neural Networks. 4 September 2018. https://towardsdatascience.com/the-differences-between-artificial-and-biological-neural-networks-a8b46db828b7; and Tcheang, Lili. Are Artificial Neural Networks like the Human Brain? And does it matter? 7 November 2018. https://medium.com/digital-catapult/are-artificial-neural-networks-like-the-human-brain-and-does-it-matter-3add0f029273.

The Stages of Learning a New Data or Programming Skill

Many people have admirably sought to learn data science, data analytics, a programming language, or some other data or programming skill in order to develop themselves professionally and/or pursue a new career path. Excitingly, learning such skills has become significantly easier to do online. But online learning can also foster unrealistic expectations of what learning one of these skills entails, since it removes prospective learners from the physical community of experts who would otherwise introduce them to the expectations of the field.

The goal of this article is to help rectify that by explaining the basic stages typically needed to develop mastery of a new data or programming skill. This will hopefully inform your high-level expectations of what learning the skill entails and also help you choose the right course or set of courses to ensure you progress through all three stages.

By data skill, I mean any data field like data science, data analytics, or data engineering, or any specific skill or practice within a data field that someone might seek to learn; by programming skill, I mean the skills necessary to learn and code in a programming language.

These are the three basic learning stages to master any of these topics:

Stage 1: Grasp the basic concepts of the topic
Stage 2: Complete a guided project
Stage 3: Complete a self-directed project

Stage 1: Grasping Basic Concepts

Grasping basic concepts entails learning the relevant vocabulary, syntax, and key approaches. Often programs teach each concept distinctly, one at a time. For example, when learning a new programming language, you might learn the major commands and syntax rules, and for data science, you might learn about each of the most prominent machine learning models one at a time.

This is different from applying the concepts widely, and at this stage, you may not yet be able to handle mixing all the concepts together in a complex problem (that is Stage 2). Programs often teach the material at this point sequentially (even though that can be difficult for nonlinear learners).

For example, W3Schools provides grounded Stage 1 teaching for most programming languages and data science skills. They provide sequential exercises working through the basic syntax components of a new language, ever so slightly increasing in complexity along the way.

Now, completing the first stage alone does not amount to full mastery of a topic. After practicing each piece one at a time, you must also transition into Stage 2, where you start to learn how to combine them in a more complex problem.

Stage 2: Guided Project

Here you practice putting all the pieces together through a guided project (or projects). A guided project models how the components fit together in an actual project. I liken these to building a Lego kit: following step-by-step instructions to build a cool model (instead of building your own object from scratch, which is Stage 3). They hold your hand through completion to illustrate what putting all the isolated skills and concepts together in a complicated project entails.

Stage 3: Independent Project

In the third stage, you bring everything you have learned together to complete a project on your own. Unlike in Stage 2, when they held your hand, you now have the freedom to struggle, which is necessary for learning. You are developing the skills involved in forming and carrying out a project on your own.

At the same time, you are learning what it looks like to implement those skills “in the wild” of a real-life project. In the previous stages, instructors often coddle their students, providing the cleaned, perfectly ready-to-go example problems you might find in a textbook, which are necessary for learning the basic concepts. Like a Lego kit, the components of the project have been groomed to produce exactly what you are building. In Stage 3, you start to experience the kinds of messiness common in real-world projects, where you have to find the pieces you need and/or figure out how to make do with the ones you have.

For example, among data science learners, this stage is when students first learn to deal with the complexities of finding the right data for their problem, determining the best questions for a given dataset, and/or cleaning inconsistent data. Beforehand, most examples probably came with already-cleaned data matched to the specific task they were built for.

A certain amount of trial by fire is often needed to learn how to develop your own project. Your instructor(s) might take a little more of a backseat role during this process, looking over what you have done, answering any questions you might have, and nudging you when necessary. In my experience, exploring strategies yourself is the best way to learn Stage 3. Hopefully, at the end of it all, you will produce a nifty project that you can show prospective employers or whoever else you might wish to impress.

Conclusion

These are the three most common stages in developing initial mastery of a new data or programming skill or field. They cover the learning generally necessary to pick up the new skill, but there are plenty of further levels of learning after you complete them. For example, grasping basic data science concepts, completing a guided project, and learning how to conduct your own self-directed data science project would be enough to make you a new inductee into the data science community, but you would still be a newbie data scientist. It is only the tip of the iceberg for what you can learn and how you will grow as a data scientist.

Now, despite my calling them stages, not everyone learns them in sequential order, especially given the variety of circumstances and learning styles. For example, some might complete all three stages for a specific subset of skills in the field they are learning and then go back to Stage 1 for another subset. Most education programs will include all three stages, more or less in order.

Some education programs, however, might completely lack or provide insufficient resources for one or two stages. Assessing whether a program adequately includes all three can be an effective way to determine how good it is at teaching and whether it is worth your money and/or time. When choosing to learn a new skill, I would recommend a program or combination of programs that includes all three. If a program you want to take or are currently completing lacks one or two of these stages, you can try to find another (hopefully free) way to complete that stage yourself online. For example, online courses and tutorials very frequently fail to provide Stage 3 (and in some cases, Stage 2), so after you complete one, I would recommend finding a project to work on.

Finally, when you are encountering a difficulty learning, it might be because you need to go backwards to a previous stage. For example, when many learners move to Stage 2, they must periodically swing back into Stage 1 to review a few core concepts when they see those concepts applied in a new way. Similarly, when completing a project in Stage 3, there is nothing wrong with reviewing Stage 2 or even Stage 1 materials.  

Now, be careful, because you can misdiagnose this. Learning anything can be frustrating. Sometimes the difficulties you are having are not rooted in a need to review or relearn past material; you simply need to push through with the new material until you start to get it. In those cases, some students revert to material where they can feel safe and confident instead of challenging themselves. Even in those cases, however, like rocking a car between reverse and drive to get over a bump, quickly going backwards can help launch you forward over the hurdle. What matters most is knowing yourself – your learning tendencies and how you typically respond – and checking in as much as you can with instructors and/or experts in the field who have been there and done that, to help you determine the best way to overcome whatever challenge you are having.

Photo credit #1: Jukan Tateisi at https://unsplash.com/photos/bJhT_8nbUA0

Photo credit #2: qimono at https://pixabay.com/illustrations/cog-wheels-gear-wheel-machine-2125178/

Photo credit #3: Bonneval Sebastien at https://unsplash.com/photos/lG-6_ox_UXE

Photo credit #4: Holly Mandarich at https://unsplash.com/photos/UVyOfX3v0Ls

Photo credit #5: George Bakos at https://unsplash.com/photos/VDAzcZyjun8

What Is Data Science and Machine Learning? A Short Guide for the Unsure

What is data science, and what is machine learning? This is a short overview for someone who has never heard of either.

What Is Data Science?

In the abstract, data science is an interdisciplinary field that seeks to use algorithms to organize, process, and analyze data. It represents a shift towards using computer programming, specifically machine learning algorithms and other related computational tools, to process and analyze data.

By 2008, companies had started using the term data scientist to refer to a growing group of professionals utilizing advanced computing to organize and analyze large datasets,[i] and thus from the get-go, the practical needs of professional contexts have shaped the field. Data science combines strands from computer science, mathematics (particularly statistics and linear algebra), engineering, the social sciences, and several other fields to address specific real-world data problems.

On a practical level, I consider a data scientist someone who helps develop machine learning algorithms to analyze data. Machine learning algorithms form the central set of techniques/tools around which data science is built. For me personally, if it does not involve machine learning, it is not data science.

What Is Machine Learning?

Machine learning is a complex term: what does it mean to say that a machine “learns”? Over time, data scientists have provided many intricate definitions of machine learning, but at its most basic, machine learning algorithms are algorithms that adapt or modify their approach to a task based on new data or information over time.

Herbert Simon provides a commonly used technical definition: “Learning denotes changes in the system that are adaptive in the sense that they enable the system to do the task or tasks drawn from the same population more efficiently and more effectively the next time.”[ii] As this definition implies, machine learning algorithms adapt by iteratively testing their performance against the same or similar data. Data scientists (and others) have developed several types of machine learning algorithms, including decision tree modeling, neural networks, logistic regression, collaborative filtering, support vector machines, cluster analysis, and reinforcement learning, among others.
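
To make Simon’s definition concrete, here is a minimal sketch in Python (the data and single-parameter model are invented for illustration): the model adjusts its one parameter with every new example it sees, so its error on the same prediction task shrinks over time.

```python
# A minimal illustration of "learning" in Simon's sense: the model's one
# parameter (a slope w) adapts as it iterates over data, so it performs
# the same prediction task better each pass. The data here is invented.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 3.0 * x + rng.normal(scale=0.5, size=200)  # true slope is 3

w = 0.0              # initial guess for the slope
learning_rate = 0.05

for epoch in range(5):
    for x_i, y_i in zip(x, y):
        error = w * x_i - y_i             # test performance on this example
        w -= learning_rate * error * x_i  # adapt based on the error
    mse = np.mean((w * x - y) ** 2)
    print(f"epoch {epoch}: w = {w:.3f}, mse = {mse:.3f}")  # error shrinks each pass
```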

Data scientists generally split machine learning algorithms into two categories: supervised and unsupervised learning. Both involve training the algorithm to complete a given task, but they differ in how they test the algorithm’s performance. In supervised learning, the developers provide a clear set of answers against which the algorithm’s predictions are judged; in unsupervised learning, evaluating the algorithm’s performance is much more open-ended. I liken the difference to the exams teachers gave us in school: some tests, like multiple-choice exams, have clear right and wrong answers, while other exams, like essays, are open-ended with qualitative means of determining goodness. Just as the nature of the curriculum determines the best type of exam, which type of learning to perform depends on the project context and the nature of the data.
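
As a toy illustration of the two categories, here is a short sketch using scikit-learn (mentioned earlier in this article) and its built-in iris dataset; the specific models chosen are simply convenient examples, not the only options.

```python
# Supervised vs. unsupervised learning on scikit-learn's iris dataset.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Supervised: the labels y act as an "answer key" for grading predictions.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised: no labels are given; the algorithm finds groupings on its
# own, and judging their quality is more open-ended (like grading an essay).
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster sizes:", [int((km.labels_ == k).sum()) for k in range(3)])
```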

Here are four types of tasks for which machine learning algorithms are useful:

  1. Autonomy: Teaching computers to do a task without the direct aid or intervention of humans (e.g. autonomous vehicles)
  2. Fluctuation: Helping machines adjust when the requirements or data change over time
  3. Intuitive Processing: Conducting (or assisting in) tasks humans do naturally but cannot explain how to do computationally/algorithmically (e.g. image recognition)
  4. Big Data: Breaking down data that is too large to handle otherwise

Machine learning algorithms have proven to be a very powerful set of tools. See this article for a more detailed discussion of when machine learning is useful.


[i] Berkeley School of Information. (2019). What is Data Science? Retrieved from https://datascience.berkeley.edu/about/what-is-data-science/.

[ii] Simon, quoted in Kononenko, I., & Kukar, M. (2007). Machine Learning and Data Mining. Philadelphia: Elsevier.

Photo credit #1: Frank V at https://unsplash.com/photos/zbLW0FG8XU8

Photo credit #2: Brett Jordan at https://unsplash.com/photos/HzOclMmYryc

Interdisciplinary Anthropology and Data Science Master’s Thesis: A Quick and Dirty Project Summary

This is a quick and dirty summary of my master’s practicum research project with Indicia Consulting over the summer of 2018. For anyone interested in more detail, here is a more detailed report, and here is the final report with Indicia. 

Background

My practicum was the sixth stage of a multi-year research project. The California Energy Commission commissioned this larger project to understand the potential relationship between individual energy consumption and technology usage. In stages one through five, we isolated certain clusters of behaviors and attitudes around new technology adoption – which Indicia called cybersensitivity – and demonstrated that cybersensitivity tended to be associated with a willingness to adopt energy-saving technology like smart meters.

This led to a key question: How can one identify cybersensitivity among a broader population such as a community, county, or state? Answering this question was the main goal of my practicum project.

In the past stages of the research project, the team used ethnographic research to establish criteria for whether someone was cybersensitive, based on several hours of interviews and observations about their technology usage. These interviews and observations certainly helped the research team analyze behavioral and attitudinal patterns, determine which patterns were significant, and develop those patterns into the concept of cybersensitivity, but they are too time- and resource-intensive to perform with an entire population. One generally does not have the ability to interview everyone in a community, county, or state. I sought to address this directly in my project.

Task 1 (June 2015 – Sept 2018) – General Project Tasks – Administrative (N/A): Developed project scope and timeline, adjusting as the project unfolded.
Task 2 (July 2015 – July 2016) – Documenting and analyzing emerging attitudes, emotions, experiences, habits, and practices around technology adoption – Survey: Conducted survey research to observe patterns of attitudes and behaviors among cybersensitives/awares.
Task 3 (Sept 2016 – Dec 2016) – Identifying the attributes, characteristics, and psychological drivers of cybersensitives – Interviews and Participant-Observation: Conducted in-depth interviews and observations, coding for psychological factors, energy consumption attitudes and behaviors, and technological device purchasing/usage.
Task 4* (Sept 2016 – July 2017) – Assessing cybersensitives’ valence with technology – Statistical Analysis: Tested for statistically significant differences in demographics, behaviors, and beliefs/attitudes between cyber status groups.
Task 5 (Aug 2017 – Dec 2018) – Developing critical insights for supporting residential engagement in energy-efficient behaviors – Statistical Analysis: Analyzed utility data patterns of study participants, comparing them with the general population.
Task 6 (March 2018 – Aug 2018) – Recommending an alternative energy efficiency potential model – Decision Tree Modeling: Constructed decision tree models to classify an individual’s cyber status.

Project Goal

The overall goal for the project was to produce a scalable method to assess whether someone exhibits cybersensitivity based on data measurable across an entire population. In doing this, the project also helped address the following research needs:

  1. Created a method that could scale further across a larger population, assessing whether cybersensitives were more willing to adopt energy-saving technologies across a community, county, or state
  2. Provided the infrastructure to determine how much promoting energy-saving campaigns targeting cybersensitives specifically would reduce energy consumption in California
  3. Helped the California Energy Commission determine the best means to reach cybersensitives for specific energy-saving campaigns

The Project

I used machine learning modeling to create a decision-making flow to isolate cybersensitives in a population. Random forests and decision trees produced the best models for Indicia’s needs: random forests in accuracy and robustness, and decision trees in human decipherability. Through them, I created a programmable yet human-comprehensible framework for determining whether an individual is cybersensitive based on behaviors and other characteristics that an organization could easily assess within a whole population. Because the model was both easy for humans to read and encodable computationally, any energy organization could readily understand, replicate, and further develop it, using and refining it for their own purposes.
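
To give a flavor of what that modeling looks like in practice, here is a hedged sketch using scikit-learn. The feature names and data below are invented stand-ins; the real models were trained on features drawn from Indicia’s survey and utility data.

```python
# Hypothetical sketch of the two model types: a random forest for accuracy
# and robustness, and a shallow decision tree for human decipherability.
# All features and data are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(42)
n = 500
devices = rng.integers(0, 10, n)       # hypothetical: smart devices owned
early_adopter = rng.integers(0, 2, n)  # hypothetical: early-adopter flag
usage_hours = rng.normal(4, 2, n)      # hypothetical: daily device hours
X = np.column_stack([devices, early_adopter, usage_hours])
y = (devices + 5 * early_adopter + rng.normal(size=n) > 7).astype(int)  # cyber status

forest = RandomForestClassifier(n_estimators=200, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
print("forest accuracy:", cross_val_score(forest, X, y, cv=5).mean())
print("tree accuracy:  ", cross_val_score(tree, X, y, cv=5).mean())

# The shallow tree prints as a human-readable decision flow, which is the
# property that makes it easy for organizations to understand and replicate.
tree.fit(X, y)
print(export_text(tree, feature_names=["devices", "early_adopter", "usage_hours"]))
```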

Conclusion

This is a quick overview of my master’s practicum project. For more details on what modeling I did, how I did it, what results it produced, and how it fit within the wider needs of the multi-year research project, please see my full report.

I really appreciated the opportunity to get my hands dirty integrating ethnography and data science to help address a real-world problem. This summary only scratches the surface of what Indicia did with the California Energy Commission to encourage sustainable energy usage across society. Hopefully, though, it will inspire you to integrate ethnography and data science to address whatever complex questions you face. It certainly inspired me.

Thank you to Susan Mazur-Stommen and Haley Gilbert for your help in organizing and completing the project. I would also like to thank my committee at the University of Memphis – Dr. Keri Brondo, Dr. Ted Maclin, Dr. Deepak Venugopal, and Dr. Katherine Hicks – for their academic support.

Evaluating the Effectiveness of Part of Speech Augmentation in Next Word Predictors

The following is a project I completed for a graduate course in Artificial Intelligence at the University of Memphis in the spring of 2019. For the project, I analyzed whether part-of-speech evaluation could improve Markov chain-based next-word predictors. In particular, I developed and tested two different strategies for incorporating part-of-speech predictions, which I termed the excluder and the multiplier. The multiplier method performed better than the excluder and matched the performance of the control. Hopefully, this is a helpful exploration into ways to use lexical information to improve next-word predictors.
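
To give a rough sense of how the multiplier strategy works, here is a simplified reconstruction with a tiny hand-tagged corpus (this is my illustrative sketch, not the actual project code): each candidate word’s bigram probability is multiplied by the probability that its part of speech follows the current word’s part of speech.

```python
# Simplified reconstruction of the "multiplier" strategy: scale each
# bigram candidate's probability by a part-of-speech transition weight.
# The corpus and hand-assigned POS tags below are toy assumptions.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()
pos = {"the": "DET", "cat": "NOUN", "dog": "NOUN", "mat": "NOUN",
       "rug": "NOUN", "sat": "VERB", "on": "PREP", ".": "PUNCT"}

bigrams = defaultdict(Counter)    # word -> counts of following words
pos_trans = defaultdict(Counter)  # POS tag -> counts of following POS tags
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1
    pos_trans[pos[w1]][pos[w2]] += 1

def predict_next(word):
    candidates = bigrams[word]
    total = sum(candidates.values())
    pos_total = sum(pos_trans[pos[word]].values())
    scored = {}
    for cand, count in candidates.items():
        markov_p = count / total                                # plain Markov chain
        pos_mult = pos_trans[pos[word]][pos[cand]] / pos_total  # POS multiplier
        scored[cand] = markov_p * pos_mult
    return max(scored, key=scored.get)

print(predict_next("sat"))  # -> "on"
```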


Photo Credit: Brett Jordan from https://unsplash.com/photos/EvJ7uvqQb3E