This is the third and final part of our conversation. In Part 3, she described the skills and types of people necessary to build and assess artificial intelligence teams.
Dr. Gemma Galdon-Clavell is a leading voice on technology ethics and algorithmic accountability. She is the founder and CEO of Eticas, where her multidisciplinary background in the social, ethical, and legal impact of data-intensive technology allows her and her team to design and implement practical solutions to data protection, ethics, explainability, and bias challenges in AI. She conceived and architected the Algorithmic Audit Framework, which now serves as the foundation for Eticas’s flagship product, the Algorithmic Audit.
For my first interview in the Interview Series, I interviewed Astrid Countee. She is a business anthropologist and technologist with a background in anthropology, software engineering, and data science. She currently works as a user researcher at Holo, a peer-to-peer distributed company; as a research associate at The Plenary, an arts and education nonprofit; and as a co-founder of Missing Link Studios, which distributes the This Anthro Life podcast.
If the audio does not play on your computer, you can download it here:
For my second interview in the Interview Series, I interviewed Schaun Wheeler. Schaun is co-founder of Aampe, a startup that embeds an active learning system into mobile apps to turn push notifications into part of the app’s user interface. Before he co-founded Aampe, Schaun was the data science lead for the award-winning Consumer Graph intelligence product at Valassis, a U.S. ad-tech firm. And before that he founded and directed the data science team at Success Academy Charter Schools in New York City. Then before that, Schaun was one of the first people to champion the use of statistical inference to understand massive unstructured data at the United States Department of the Army. Schaun has a Ph.D. in Cultural Anthropology from the University of Connecticut.
If the audio does not play on your computer, you can download it here:
I interviewed Olga Shiyan as part of my Interview Series. In it, she discusses her anti-corruption work in Kazakhstan with Transparency International. In particular, she highlights various projects that have integrated anthropology with data science and statistics.
Olga Shiyan is the Executive Director of Transparency International’s chapter in Kazakhstan. She specializes in advocacy, legislation and draft laws, and democratic training programs. For this work, she has developed research methods that combine anthropology with data science and statistics. In 2019, the Kazakhstan Geographic Society awarded her a medal for her anti-corruption work.
To learn more about Olga, feel free to check out the following:
For Part 8 in my Interview Series, I interviewed Scarleth Herrera, a digital anthropologist and founder of Orez Anthropological Research. In it, we discuss her experiences starting her own digital anthropology research company, transitioning into artificial intelligence-related work, and conducting anthropological research outside of academia.
Scarleth lives in South Florida. Orez Anthropological Research is a non-profit dedicated to exploring and advancing research in digital anthropology. She is also a Research Scholar at the Ronin Institute in New Jersey. Her current research focuses on the implications artificial intelligence may have for society in general and for low-income communities in particular, and she is also passionate about issues facing immigrant communities in the United States.
I recently organized a professional group called EPIC Data Scientists + Ethnographers along with a few others who are both data scientists and ethnographers. Our goal is to form a virtual community to discuss ways to incorporate ethnography and data science, just like I strive to do on this website.
If you are interested in working with others on this or simply interested in learning more, feel free to join. Whether you are both a data scientist and ethnographer, only one of them, or neither, we would love to hear your perspective.
Thank you, EPIC, for helping to develop this and giving us a platform.
Earlier this week, Matt Artz, Astrid Countee, and I ran a workshop at the American Anthropological Association’s 2020 annual conference entitled “Breaking into Tech.” We discussed strategies for anthropologists interested in working in the tech world.
Here is the presentation for anyone who might find it useful but could not attend:
In a previous article, I discussed the value of integrating data science and ethnography. On LinkedIn, people commented that they were interested and wanted to hear more detail about potential ways to do this. I replied that, in my experience, how to practically conduct studies that integrate the two is easier to demonstrate through example than in the abstract, since the details vary based on the specific needs of each project.
In this article, I intend to do exactly that: analyze four innovative projects that in some way integrated data science and ethnography. I hope they will get your creative juices flowing and help you think through how to creatively combine the two for whatever project you are working on.
Synopsis:

Project: No Show Model
How It Integrated Data Science and Ethnography:
- Used ethnography to design machine learning software
- Used ethnography to understand how users make sense of and behave towards a machine learning system they encounter and how this, in turn, shapes the development of the machine learning algorithm(s)
Link to Learn More:
Project 1: No Show Model
A medical clinic at a hospital system in New York City asked me to use machine learning to build a show rate predictor in order to inform and improve its scheduling practices. During the initial construction phase, I used ethnography both to understand in more depth the scheduling problem the clinic faced and to determine an appropriate interface design.
Through an ethnographic inquiry, I discovered the most important question schedulers ask when scheduling appointments: “Of the people scheduled for a given doctor on a particular day, how many of them are likely to actually show up?” I then built a machine learning model to answer this exact question. My ethnographic inquiry provided the design requirements for the data science project.
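To make this concrete, here is a minimal sketch in Python of what such a show-rate predictor could look like. It is not the clinic’s actual code: the file, column, and feature names are hypothetical stand-ins for appointment records, and the model choice is just one reasonable option.

```python
# A minimal sketch (not the clinic's actual code) of a show-rate predictor.
# Column names and features are hypothetical stand-ins for appointment records.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

appointments = pd.read_csv("appointments.csv")  # hypothetical historical data
features = ["lead_time_days", "prior_no_shows", "age", "appointment_hour"]
X, y = appointments[features], appointments["showed_up"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Answer the schedulers' question: of the patients booked for one doctor on
# one day, how many are likely to actually show up?
day_schedule = appointments.query("doctor_id == 42 and date == '2020-03-02'")
expected_shows = model.predict_proba(day_schedule[features])[:, 1].sum()
print(f"Expected arrivals: {expected_shows:.1f} of {len(day_schedule)} booked")
```

Summing the predicted probabilities for a doctor-day answers the schedulers’ question directly, rather than classifying each appointment in isolation.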
In addition, I used my ethnographic inquiries to design the interface. I observed how schedulers interacted with their current scheduling software, which gave me a sense of which kinds of visualizations would and would not work for my app.
This project exemplifies how ethnography can be helpful both in the development stage of a machine learning project, to determine what the machine learning algorithm(s) need to do, and on the frontend, when communicating the algorithm(s) to users and assessing their success with those users.
As both an ethnographer and a data scientist, I was able to translate my ethnographic insights seamlessly into machine learning modeling and API specifications, and to conduct follow-up ethnographic inquiries to ensure that what I was building would meet the clinic’s needs.
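For illustration, the API layer might look something like the Flask sketch below, which builds on the hypothetical `model` and `features` from the earlier snippet. The route and payload fields are invented for this example, not the actual specification.

```python
# Illustrative only: a minimal endpoint the scheduling interface could call.
# Reuses the hypothetical `model` and `features` from the earlier sketch.
import pandas as pd
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/expected-shows", methods=["POST"])
def expected_shows():
    # Expects a JSON list of appointments for one doctor on one day,
    # each containing the same feature fields the model was trained on.
    records = pd.DataFrame(request.get_json())
    probabilities = model.predict_proba(records[features])[:, 1]
    return jsonify({"booked": len(records),
                    "expected_shows": float(probabilities.sum())})
```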
Project 2: Cybersensitivity Study
I conducted this project with Indicia Consulting. Its goal was to explore potential connections between individuals’ energy consumption and their relationship with new technology. This is an example of using ethnography to explore and determine potential social and cultural patterns in-depth with a few people and then using data science to analyze those patterns across a large population.
We started the project by observing and interviewing about thirty participants, but as the study progressed, we needed to develop a scalable method to analyze the patterns across whole communities, counties, and even states.
Ethnography is a great tool for exploring a phenomenon in depth and for developing initial patterns, but it is resource-intensive and thus difficult to conduct with a large group of people. It is not practical for, say, analyzing thousands of people. Data science, on the other hand, can easily test, across an entire population, the validity of patterns noticed in smaller ethnographic studies, yet because it lacks ethnography’s granularity, it would often miss such intricate patterns on its own.
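As a rough illustration of that division of labor, the snippet below tests whether a pattern an ethnographer might notice among roughly thirty participants (say, that households highly engaged with new technology use energy differently) also holds across a population-scale dataset. The dataset, column names, and threshold are all hypothetical, not the actual Cybersensitivity data.

```python
# Hypothetical sketch: checking whether a pattern noticed ethnographically in
# ~30 participants (e.g., high engagement with new devices ~ different energy
# use) also appears across a population-scale dataset. Column names are
# illustrative, not from the actual study.
import pandas as pd
from scipy import stats

households = pd.read_csv("household_energy.csv")  # hypothetical utility data
high = households.loc[households["tech_engagement_score"] >= 7, "monthly_kwh"]
low = households.loc[households["tech_engagement_score"] < 7, "monthly_kwh"]

t, p = stats.ttest_ind(high, low, equal_var=False)
print(f"Mean kWh (high engagement): {high.mean():.0f}")
print(f"Mean kWh (low engagement):  {low.mean():.0f}")
print(f"Welch t-test: t={t:.2f}, p={p:.3g}")
```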
Ethnography is also great on the back end for determining whether the implemented machine learning models and their resulting insights make sense on the ground. This forms a type of iterative feedback loop, where data science scales up ethnographic insights and ethnography contextualizes data science models.
Thus, ethnography and data science cover each other’s weaknesses well, forming a great methodological duo for projects centered on understanding customers, users, colleagues, or other groups of people in depth.
Project 3: Facebook Newsfeed Folk Theories
In their study, Motahhare Eslami and her team of researchers conducted an ethnographic inquiry into how various Facebook users conceive of the way the Facebook Newsfeed selects which posts/stories rise to the top of their feeds. They analyzed several different “folk theories”: everyday people’s working theories about the criteria this machine learning system uses to select top stories.
How users think the overall system works influences how they respond to the Newsfeed. Users who believe, for example, that the algorithm prioritizes posts from friends whose posts they have liked in the past will often intentionally like the posts of their closest friends and family so that they see more of those posts.
Users’ perspectives on how the Newsfeed algorithm works influences how they respond to it, which, in turn, affects the very data the algorithm learns from and thus how the algorithm develops. This creates a cyclic feedback loop that influences the development of the machine learning algorithmic systems over time.
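A toy simulation can make this loop easier to see. The code below is not the Newsfeed’s actual mechanics, just a sketch of the dynamic: a user acts on the folk theory that liking a friend’s posts makes them rank higher, the ranker’s weights adjust to that engagement, and the friend’s posts do rise, reinforcing the theory.

```python
# Toy illustration (not Facebook's algorithm) of the feedback loop described
# above: a user who believes "liking a friend's posts makes them rank higher"
# likes those posts, the ranker retrains on that behavior, and the friend's
# posts do rise, reinforcing the folk theory.
import random

affinity = {"close_friend": 0.5, "acquaintance": 0.5}  # learned ranking weights

for round_ in range(5):
    # User acts on their folk theory: deliberately like the close friend's posts.
    likes = {"close_friend": 1.0, "acquaintance": random.random() * 0.2}
    # "Retraining": nudge each weight toward the observed engagement.
    for person in affinity:
        affinity[person] += 0.2 * (likes[person] - affinity[person])
    print(round_, {k: round(v, 2) for k, v in affinity.items()})
```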
Their research exemplifies the importance of understanding how people think about, respond to, and more broadly relate to machine learning-based software systems. Ethnographies of people’s interactions with such systems are a crucial way to develop this understanding.
In a way, many machine learning algorithms are very social in nature: they – or at least the overall software system in which they exist – often succeed or fail based on how humans interact with them. In such cases, no matter how technically robust a machine learning algorithm is, if potential users cannot positively and productively relate to it, then it will fail.
Ethnographies of the “social life” of machine learning software systems (by which I mean how they become a part of – or in some cases fail to become a part of – individuals’ lives) help us understand how the algorithms are developing or learning and determine whether they are succeeding at what we intended them to do. Such ethnographies require not only in-depth expertise in ethnographic methodology but also an in-depth understanding of how machine learning algorithms work, in order to understand how social behavior might be influencing their internal development.
Project 4: Thing Ethnography
Elisa Giaccardi and her research team have been pioneering the use of data science and machine learning to understand and incorporate the perspective of things into ethnographies. With the development of the internet of things (IoT), she suggests that data from object sensors could provide fresh insights in ethnographies of how humans relate to their environment by helping to describe how these objects relate to each other. She calls this thing ethnography.
This experimental approach exemplifies one way to use machine learning algorithms within ethnographies to study social processes and interactions in and of themselves. It could be an innovative way to analyze the social role of these IoT objects in daily life within ethnographic studies. If Eslami’s work exemplifies a way to graft ethnographic analysis into the design cycle of machine learning algorithms, Giaccardi’s research illustrates one way to incorporate data science and machine learning analysis into ethnographies.
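To give a flavor of what such an analysis might involve (this is my own hypothetical sketch, not Giaccardi’s method), one could look at which household objects tend to be active at the same time, letting the sensor data suggest relationships among things that an ethnographer could then probe further.

```python
# Hypothetical sketch of the kind of analysis a "thing ethnography" might use:
# given activity logs from household IoT objects, see which objects tend to be
# active together. Device names and data are illustrative only.
import pandas as pd

logs = pd.read_csv("sensor_logs.csv", parse_dates=["timestamp"])  # hypothetical
# One row per timestamp, one 0/1 column per device (kettle, radio, desk_lamp...)
activity = logs.set_index("timestamp").resample("1min").max().fillna(0)

# Pairwise correlation of activity: which things "keep company" with which?
co_use = activity.corr()
print(co_use.round(2))
```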
Conclusion
These are four examples of innovative projects that integrate data science and ethnography to meet their respective goals. I do not intend them to be a complete or exhaustive account of how to integrate these methodologies, but rather food for thought to spur further creative thinking about how to connect them.
For those who, when they hear the idea of integrating data science and ethnography, ask the reasonable question, “Interesting, but what would that look like practically?”, here are four examples of how it could look. Hopefully, they help you develop your own ideas for combining the two in whatever project you are working on, even if its details are completely different.
This is a research paper I wrote for a master’s course on Applied Anthropology at the University of Memphis. The overall master’s program sought to train students in applied anthropology, and the goal of this course was to teach the foundations of what applied anthropology is, in contrast to other types of anthropology.
Even though I found the course interesting, its curriculum lacked the readings and perspectives of applied anthropologists in the business world. As I discuss in the paper, statistically speaking, a significant number of applied anthropologists (including alumni of the University of Memphis’s applied anthropology program) work in the business sector, so excluding them leaves out what might be the largest group of applied anthropologists from their own field. I wrote this essay as a subtle nudge to encourage the course designers to add the work of business anthropologists, particularly UX researchers, to their curriculum.
Because the curriculum included few resources by applied business anthropologists, I had to assemble my own entirely by myself. Other applied anthropologists have told me they have encountered this as well. So, hopefully, in addition to the essay providing a helpful analysis of applied business anthropology, its bibliography might also offer a starting collection of business anthropology resources for you to explore.
Data science’s popularity has grown in the last few years, and many have confused it with its older, more familiar relative: statistics. As someone who has worked both as a data scientist and as a statistician, I frequently encounter such confusion. This post seeks to clarify some of the key differences between them.
Before I get into their differences, though, let’s define them. Statistics as a discipline refers to the mathematical processes of collecting, organizing, analyzing, and communicating data. Within statistics, I generally define “traditional” statistics as the statistical processes taught in introductory statistics courses, like basic descriptive statistics, hypothesis testing, and confidence intervals: generally what people outside of statistics, especially in the business world, think of when they hear the word “statistics.”
Data science in its broadest sense is the multidisciplinary science of organizing, processing, and analyzing computational data to solve problems. Although the two are similar, data science differs from both statistics broadly and “traditional” statistics in particular:
| Difference | Statistics | Data Science |
| --- | --- | --- |
| #1 | Field of Mathematics | Interdisciplinary |
| #2 | Sampled Data | Comprehensive Data |
| #3 | Confirming Hypotheses | Exploring Hypotheses |
Difference #1: Data Science Is More than a Field of Mathematics
Statistics is a field of mathematics, whereas data science refers to more than just math. At its simplest, data science centers on the use of computational data to solve problems,[i] which means it includes not only the mathematics/statistics needed to break down the computational data but also the computer science and engineering thinking necessary to code those algorithms efficiently and effectively, and the business, policy, or other subject-specific “smarts” to develop strategic decision-making based on that analysis.
Thus, statistics forms a crucial component of data science, but data science includes more than just statistics. Statistics, as a field of mathematics, covers the mathematical processes of analyzing and interpreting data, whereas data science also includes the algorithmic problem-solving needed to do the analysis computationally and the art of using that analysis to make decisions that meet the practical needs of the context. In other words, statistics is one part of the process, while data science generally refers to the entire process of analyzing computational data. On a practical level, many data scientists come not from a pure statistics background but from computer science or engineering, leveraging their coding expertise to develop efficient algorithmic systems.
Difference #2: Comprehensive vs Sample Data
In statistical studies, researchers are often unable to analyze the entire population, that is, the whole group they want to understand, so instead they create a smaller, more manageable sample of individuals that they hope represents the population as a whole. Data science projects, however, often involve analyzing big, comprehensive data that encompasses the entire population.
The tools of traditional statistics work well for scientific studies, where one must go out and collect data on the topic in question. Because this is generally very expensive and time-consuming, researchers can only collect data on a subset of the wider population most of the time.
Recent developments in computation, including the ability to gather, store, transfer, and process ever larger amounts of data, have expanded the types of quantitative research now possible, and data science has developed to address these new types of research. Instead of gathering a carefully chosen sample of the population based on a heavily scrutinized set of variables, many data science projects require finding meaningful insights in the myriad data already collected about the entire population.
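A small, contrived example of the contrast, using made-up numbers: with only a sample, the statistician estimates the population value and quantifies the uncertainty of that estimate; with comprehensive records, the data scientist can often compute the value of interest directly.

```python
# Illustrative contrast (hypothetical data): with a sample you infer the
# population value and quantify uncertainty; with comprehensive data you can
# often just compute it directly.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
population = rng.normal(loc=50, scale=10, size=1_000_000)  # stand-in for "everyone"

# Traditional statistics: a costly survey yields only a sample.
sample = rng.choice(population, size=200, replace=False)
ci = stats.t.interval(0.95, df=len(sample) - 1,
                      loc=sample.mean(), scale=stats.sem(sample))
print(f"Sample estimate: {sample.mean():.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")

# Data science setting: the full records are already collected.
print(f"Population value computed directly: {population.mean():.2f}")
```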
Difference #3: Exploratory vs Confirming
Data scientists often seek to build models that do something with the data, whereas statisticians, through their analysis, seek to learn something from the data. Data scientists thus often assess their machine learning models based on how effectively they perform a given task: how well they optimize a variable, determine the best course of action, correctly identify features of an image, provide a good recommendation for the user, and so on. To do this, data scientists often compare the effectiveness or accuracy of many models based on a chosen performance metric (or metrics).
In traditional statistics, the questions often center around using data to understand the research topic based on the findings from a sample. Questions then center around what the sample can say about the wider population and how likely its results would represent or apply to that wider population.
In contrast, machine learning models generally do not seek to explain the research topic but to do something, which can lead to a very different research strategy. Data scientists generally try to determine or produce the algorithm with the best performance (given whatever criteria they use to judge one performance as “better” than another), testing many models in the process. Statisticians often employ a single model they think represents the context accurately and then draw conclusions based on it.
Thus, data science is often a form of exploratory analysis, experimenting with several models to determine the best one for a task, while statistics is often a form of confirmatory analysis, seeking to confirm how reasonable it is to conclude that a given hypothesis or set of hypotheses holds true for the wider population.
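The following sketch, with made-up data, contrasts the two workflows: an exploratory, data-science-style comparison of several candidate models by cross-validated performance, and a confirmatory, statistics-style fit of a single model whose coefficients are then tested.

```python
# Sketch of the two workflows on the same (hypothetical) dataset.
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=1.0, size=500)

# Data science style: try several models, keep whichever predicts best.
for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(type(model).__name__, f"mean CV R^2 = {score:.3f}")

# Traditional statistics style: fit one model and test hypotheses about it.
ols = sm.OLS(y, sm.add_constant(X)).fit()
print(ols.summary())  # coefficients, confidence intervals, p-values
```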
A lot of scientific research has been theory confirming: a scientist has a model or theory of the world; they design and conduct an experiment to assess this model; then use hypothesis testing to confirm or negate that model based on the results of the experiment. With changes in data availability and computing, the value of exploratory analysis, data mining, and using data to generate hypotheses has increased dramatically (Carmichael 126).
Data science as a discipline has been at the forefront of utilizing increased computing abilities to conduct exploratory work.
Conclusion
A data scientist friend of mine once quipped that data science is simply applied computational statistics (cf. this). There is some truth in this: the mathematics of data science falls within statistics, since it involves collecting, analyzing, and communicating data, and, with its emphasis on and use of computational data, it would definitely be part of computational statistics. The mathematics of data science is also very clearly applied: geared towards solving practical problems and needs. Hence, data science and statistics interrelate.
They differ, however, both in their formal definitions and in their practical understandings. Modern computation and big data technologies have had a major influence on data science. Within statistics, computational statistics also seeks to leverage these resources, but what has become “traditional” statistics does not (yet) incorporate them. I suspect that in the next few years or decades, developments in modern computing, data science, and computational statistics will reshape what people consider “traditional” or “standard” statistics into something a bit closer to the data science of today.
For more details, see the following useful resources: