This is a research paper I wrote for a master’s course on Applied Anthropology at the University of Memphis. The overall master’s program sought to train students in applied anthropology, and the goal of this course was to teach the foundations of what applied anthropology is, in contrast to other types of anthropology.
Even though I found the course interesting, its curriculum lacked the readings and perspectives of applied anthropologists working in the business world. As I discuss in the paper, a significant share of applied anthropologists (including many alumni of the University of Memphis's applied anthropology program) work in the business sector, so excluding them leaves out what may be the largest group of applied anthropologists from their own field. I wrote this essay as a subtle nudge to encourage the course designers to add the works of business anthropologists, particularly UX researchers, to their curriculum.
Because the curriculum included no resources by applied business anthropologists, I had to assemble my own. Other applied anthropologists have told me they have encountered this gap as well. So, hopefully, in addition to the essay providing helpful analysis of applied business anthropology, its bibliography might also serve as a starting collection of business anthropology resources for you to explore.
I recently integrated ethnography and data science to develop a Show Rate Predictor for an (anonymized) hospital system. Many readers have asked for real-world examples of this integration, and this project demonstrates how the two can combine to build machine learning-based software that makes sense to users and meets their needs.
Part 1: Scoping out the Project
A particular clinic in the hospital system was experiencing a large number of appointment no-shows, which produced wasted time, frustration, and confusion for both its patients and employees. I was asked to use data science and machine learning to better understand and improve their scheduling.
I started the project by conducting ethnographic research into the clinic to learn how scheduling normally occurs, what effect no-shows were having on the clinic, and what employees saw as the driving problems. In particular, I observed and interviewed scheduling assistants to understand their day-to-day work and their perspectives on no-shows.
One major lesson I learned through all this was that when scheduling an appointment, schedulers are constantly trying to determine how many people to book on a given doctor's shift to ensure the right number show up. For example, say 12-14 patients is a good number for Dr. Rodriguez's (a made-up name) Wednesday morning shift. When deciding whether to schedule an appointment for a given patient with Dr. Rodriguez on an upcoming Wednesday, the scheduling assistants try to determine, given the appointments already on the books, whether they can expect 12-14 patients to show up. This was an inexact science: they often had to schedule 20-25 patients on a doctor's shift to ensure their ideal window of 12-14 patients would actually come that day. This created the potential for chaos, however, with too many patients arriving on some days and too few on others.
This question – how many appointments can we expect or predict to occur on a given doctor’s shift – became my driving question to answer with machine learning. After checking in with the various stakeholders at the clinic to make sure this was in fact an important and useful question to answer with machine learning, I started building.
Part 2: Building the Model
Now that I had a driving, answerable question, I decided to break it down into two sequential machine learning models:
The first model learned to predict the probability that a given appointment would occur, training on the clinic's history of kept and no-show appointments.
The second model, using the appointment probabilities from the first model, estimated how many patients would show up for each doctor's shift.
The first model combined three streams of data to assess the no-show probability: appointment data (such as how long ago it was scheduled, type of appointment, etc.); patient information, especially past appointment history; and doctor information. I performed extensive feature selection to determine the best subset of variables to use and tested several types of machine learning models before settling on gradient boosting.
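The first model can be sketched along the following lines. This is a minimal illustration using scikit-learn's gradient boosting classifier on synthetic data; the feature names and data here are hypothetical stand-ins, not the clinic's actual variables.

```python
# Sketch of the first model: a gradient boosting classifier predicting
# whether an appointment will occur. All features and labels are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
# Hypothetical features: scheduling lead time, past no-show rate, appointment type
X = np.column_stack([
    rng.integers(0, 90, n),   # days between scheduling and the appointment
    rng.random(n),            # patient's historical no-show rate
    rng.integers(0, 5, n),    # encoded appointment type
])
# Synthetic label: longer lead times and more past no-shows lower show probability
logits = 1.5 - 0.02 * X[:, 0] - 2.0 * X[:, 1]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Probability that each held-out appointment will occur (class 1)
show_probs = model.predict_proba(X_test)[:, 1]
```

In practice, feature selection and model comparison (as described above) would happen before settling on this estimator.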
The second model used the probabilities from the first model as input to predict how many patients to expect on each doctor's shift. I settled on a neural network for this model.
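The second model might look something like the sketch below: a small neural network mapping per-shift summaries of the appointment probabilities to observed attendance. The features here (number booked, sum and mean of show probabilities) are illustrative assumptions; note that summing the probabilities already gives the expected attendance, so a network like this mainly learns corrections to that baseline.

```python
# Sketch of the second model: a neural network estimating shift attendance
# from per-shift summaries of appointment probabilities. Data is synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_shifts = 500
booked = rng.integers(15, 26, n_shifts)  # appointments booked per shift
probs_sum = np.array([rng.random(b).sum() for b in booked])  # sum of show probabilities
X = np.column_stack([booked, probs_sum, probs_sum / booked])
# Synthetic attendance centered on the expected value (sum of probabilities)
y = probs_sum + rng.normal(0, 1.5, n_shifts)

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)
expected_attendance = net.predict(X[:3])  # predicted show counts for three shifts
```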
Part 3: Building an App
Next, I worked with the software engineers on my team to develop an app that would employ these models in real time and communicate the information to schedulers as they scheduled appointments. My ethnographic research was invaluable in shaping how we constructed the app.
On the back end, the app calculated the probability of occurrence for every future appointment, recalculating whenever appointments were newly scheduled or edited. Once a week, it would fold that week's new appointment data and shift attendance into each model's training data and retrain the models accordingly.
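The weekly update step can be sketched as follows. The function and variable names here are hypothetical; the sketch simply appends the latest week's labeled appointments to the training set and refits from scratch, one straightforward way to implement the retraining described above.

```python
# Hedged sketch of the weekly retraining step: fold the new week's labeled
# appointments into the training data and refit the no-show model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def weekly_update(model, X_train, y_train, X_new, y_new):
    """Append the new week's appointments and outcomes, then retrain."""
    X_train = np.vstack([X_train, X_new])
    y_train = np.concatenate([y_train, y_new])
    model.fit(X_train, y_train)
    return model, X_train, y_train

rng = np.random.default_rng(1)
X_old, y_old = rng.random((200, 3)), rng.integers(0, 2, 200)  # existing history
X_new, y_new = rng.random((20, 3)), rng.integers(0, 2, 20)    # this week's data
model, X_all, y_all = weekly_update(
    GradientBoostingClassifier(), X_old, y_old, X_new, y_new
)
```

A production system would likely persist the training data and schedule this as a weekly job rather than retraining inline.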
Through my ethnographic research, I observed how schedulers approached scheduling appointments, including what software they used in the process and how they used each program. I used those observations to determine the best ways to communicate the predictions, periodically showing my ideas to the schedulers to make sure my strategy would be helpful.
I constructed an interface to communicate the information in a way that would complement their current software. In addition to displaying the number of patients expected to arrive, if the machine learning algorithm predicted that a particular shift was underbooked, the app would mark the shift in green on the calendar interface; yellow if the shift was projected to have the ideal number of patients; and red if it was already expected to have too many. The color-coding made the information easy to take in at a glance: when trying to find an appointment time for a patient, schedulers could look first for green shifts, fall back on yellow if needed, and steer clear of red. When zooming in on a specific shift, each appointment was also color-coded (likely, unlikely, and in the middle) based on the probability that it would occur.
Conclusion
This project is one example of integrating data science and ethnography to build a machine learning app. I used ethnography to construct the app's parameters and framework. It anchored the app in the needs of the schedulers, ensuring that the machine learning modeling I developed was useful to those who would use it. Frequent check-ins before each step of development also helped confirm that my proposed concept would in fact meet their needs.
My data science and machine learning expertise guided the ethnographic process as well. Knowing how machine learning worked and what sorts of questions it could answer allowed me to readily synthesize the insights from my ethnographic inquiries into buildable machine learning models. I understood what machine learning was capable (and not capable) of doing, and I could develop strategic ways to employ it to address the issues schedulers were having.
Hence, my dual role as an ethnographer and data scientist benefited the project greatly. My listening skills from ethnography enabled me to uncover the underlying questions and issues schedulers faced, and my data science expertise gave me the technical skills to develop a viable machine learning solution. Without listening patiently through extensive ethnography, I would not have understood the problem sufficiently, but without my data science expertise, I would have been unable to decipher which questions or issues machine learning could realistically address, and how.
This exemplifies why joint expertise in data science and ethnography is invaluable in developing machine learning software. Two different individuals or teams could handle each part separately – an ethnographer analyzing the users' needs and a data scientist then determining whether machine learning modeling could help. But this seems unnecessarily disjointed, potentially producing misunderstanding, confusion, and chaos. Adding that extra layer of people can easily lead to either the ethnographer uncovering needs far too broad or complex for a machine learning-based solution or the data scientist trying to impose a machine learning "solution" on a problem the users do not have.
Developing expertise in both makes it much easier to simultaneously understand the problems or questions in a particular context and build a doable data science solution.
I wrote this essay as the midterm for a course on conducting program evaluation as an anthropologist, taught by Dr. Michael Duke in the University of Memphis Anthropology master's program. In it, I synthesize Donna Mertens's discussion of employing mixed methods research for program evaluation work in her book, Mixed Methods Design in Evaluation, as a way to present the need for what I call methodological complementarianism.
Methodological complementarianism involves complementing the team one is working with by advocating for the complementary perspectives that team needs. When conducting transdisciplinary work as applied anthropologists, instead of explicitly or implicitly seeking to maintain a "pure" anthropological approach, I think we should be more willing to produce something new in that environment, even if it no longer fits the boundaries of "pure" anthropology or ethnography but is instead a hybrid emerging from the needs of the situation. Methodological complementarianism is one practical way of doing this that I have been exploring.
On May 21st, Astrid Countee and I presented at the 2021 Response-ability Conference. We discussed strategies for leveraging data science and anthropology in the tech sector to help address societal issues. The conference's overall goal was to explore how anthropologists and software specialists in the tech sector can work together to understand and tackle social issues.
In the coming months, Response-ability plans to publish our presentation, so if you are interested in watching it, please stay tuned until then. When they make the videos accessible, they should post them here: https://response-ability.tech/2021-summit-videos/.
I appreciated the whole experience. Thank you to everyone who helped make the conference happen, and to Astrid for giving this talk with me.
On July 8th, 2021, I presented virtually at the Congress of Anthropologists and Ethnologists of Russia in Tomsk, Siberia, organized by the Association of Anthropologists and Ethnologists of Russia. My talk, titled "Integrating Anthropology and Data Science," was part of the congress's subcommittee for applied and business anthropology. I discussed the unique opportunities that integrating data science could provide anthropologists and potential strategies for combining the two disciplines.