
November 21, 2023

Can insights from psychology and economics improve how government operates? Evidence from behavioral science can help governments at all levels deliver public services, recruit and retain employees, and improve processes. Watch this Wiener Conference Call with Elizabeth Linos, faculty director of the People Lab at Harvard Kennedy School, as she shares the latest research and practices that empower the public sector to do its best work.

Wiener Conference Calls recognize Malcolm Wiener’s role in proposing and supporting this series as well as the Wiener Center for Social Policy at Harvard Kennedy School.

- [Host] Welcome to the Wiener Conference Call Series. These one-hour on-the-record phone calls feature leading experts from Harvard Kennedy School who answer your questions on public policy and current events. Wiener Conference Calls recognize Malcolm Wiener's role in proposing and supporting this series, as well as the Wiener Center for Social Policy at Harvard Kennedy School.

- Good day, everyone. I'm Ariadne Valsamis from the Office of Alumni Relations and Resource Development at Harvard's Kennedy School, and I'm very pleased to welcome you to this Wiener Conference Call. These calls are kindly and generously sustained by Dr. Malcolm Wiener and Carolyn Wiener, who support the Kennedy School in this and many other ways. It's an honor to welcome today Professor Elizabeth Linos, who is the Emma Bloomberg Associate Professor of Public Policy and Management and Faculty Director of the People Lab at Harvard Kennedy School. Her research focuses on how to integrate evidence-based policymaking into government using insights from behavioral science. Professor Linos recently received the very prestigious Kershaw Award, which recognizes significant contributions to the field of public policy made before the age of 40. She was honored for her research on how governments can build a more effective workforce and provide better public services. We're so fortunate she's agreed to share her expertise with us today, and we wanna welcome you, Professor Linos. Thank you.

- Thank you, Ariadne. It's so great to be here, and thank you all for joining. I'm thrilled to be able to have this conversation with you, and I'm gonna share some slides, hopefully share a little bit about how I got into this work and some examples of the type of research that we do on my team, and then I'm looking forward to a Q&A. So let me do that right now. Hopefully someone will stop me if it's the wrong version of the slides. But, thank you, Ariadne, as I was thinking about kind of an overview of my work and the work that we do at the People Lab, I thought it might be useful to talk a little bit about how I got into this work. I was in an incredibly fortunate position, really a privilege of a lifetime, to have the opportunity to work for government earlier in my career, almost 15 years ago now, in the Greek government under George Papandreou. And I entered government to work in the prime minister's office, along with a whole cohort of young people who were excited to work for government, who wanted to use data and evidence and innovation skills to reform government and ultimately the country we're from; I'm from Greece. And so it was a really interesting time because, you know, it was October 2009, and a new government was sworn in. Almost immediately afterwards, we found out that the previous government had fudged the statistics; many of you will remember this period. And by the spring of the next year, we were under the IMF as part of a global financial crisis with Greece at its epicenter. And so obviously my work changed a little bit, the work of the whole government changed, but it really brought to light, I think, a fundamental challenge that I saw. So we had the whole world literally looking at us and asking us to make major public sector reforms, change policies, change the way we do government. And we had the smartest people in the world helping us to think through what those large-scale policy reforms should be. And I had been lucky to work, before my time in government, with the Poverty Action Lab, led by Nobel Prize-winning economists who have done a ton of work around how to rigorously evaluate what works. And so I knew there were really smart people thinking about policy design. I knew a lot of really smart people who were thinking about policy evaluation. But it felt like what we needed at that time, urgently, was more rigorous evidence on how to actually make all these changes happen, how to really invest in the government workforce, the public servants who were asked to make all these changes while they were being called lazy or corrupt, while we were cutting their pensions. And that really resonated as a how question for me: how do we actually do the thing of making all these reforms? And it turns out that type of approach, that type of question, still holds true today. We work with a lot of governments who are asking these how questions. So they'll say, yes, I understand that we have a human capital crisis in government, but how do I do the thing of recruiting Gen Zs to work in government? Or yes, I understand that trust is important for resident-government interactions, but how do I build the trust? Or how do I solve my vacancy challenges? Or how do I reduce disparities in program access? How do I even find the people who should be accessing services? How do I support my staff? Ultimately, how can I figure out what works best and use it in real time?
And so if you take this series of questions, and many of you who have worked in government will have a whole other bucket of questions in this kind of implementation structure, and you take them as a starting point for a research agenda, you would essentially come to something that looks like this. You would probably have a bunch of research projects around how to strengthen the government workforce. So how do you recruit, retain, and support the people who work in government? You might have a series of projects around resident-government interaction. So how do we make it easier for people to access government programs? How do we make the experience of interacting with government more positive? And you might even have a series of research projects around how do we even make this idea of evidence-based policymaking more useful to government? How do we make sure that the ideas that are being presented and studied and rigorously tested in academia or in one context are actually adopted at scale across the policymaking environment? And so that's really what we've done at the People Lab and through my research agenda. It's not a coincidence that we've designed a research agenda around these three buckets, because the fundamental idea is that we want to be answering questions that a government has actually asked at a really fundamental level. So underlying these buckets of research, there are two kind of broad principles that I'll be curious to hear your reactions to when we get to Q&A. The first is what is seemingly obvious, but in fact is not that obvious for a lot of people: that if we want to solve some of the major systemic policy challenges of our time, that's going to require an investment in the government workforce itself and in government infrastructure. So really making a bold claim that government is not something to be worked around if you want to innovate; it's not something that we should avoid. It's also not just a problem that we need to hold accountable; rather, an investment in the workforce itself and in the people who are doing this work in government can produce solutions in a really meaningful way. The second, potentially even more controversial, core belief of my work and the work of the People Lab is that academia has a very active role to play in translating these big policy ideas or these innovations into practice. So as I mentioned to some of you earlier on, I grew up in an academic narrative that said, it's our job to produce the science and then it's somebody else's job to translate or build the bridge or use that in policymaking. And what we try to do in our lab and in my research is to say, no, that's our job as academics. It is a research question to think about how we take evidence of what works and make sure that it is feasible, adoptable, and usable by our government partners. So I thought I'd walk through some examples of what that looks like in practice and then open it up to questions. So for those of you who have worked in government, this first bucket of strengthening the government workforce is gonna sound very familiar. There's a huge ongoing challenge across levels of government in the US and abroad that comes out of a pretty major demographic shift. So we have a generation of baby boomers who are retiring and younger people not filling those vacancies. About one in four, a little less than one in four, young people say they want to work in government at any point in their career.
If you add to that urgent challenges around diversity, inclusion, and equity, you can see a bigger challenge. And this is true globally. If the frontline workers who are delivering services don't look like the communities that they're called to provide services for, then we have worse outcomes. And so we have kind of a human capital crisis, both on just the demographics, the age demographics, as well as challenges with the diversity of the pool of people who work in government. And then COVID happened and compounded that with pretty significant vacancy and mental health challenges in critical frontline worker occupations. So I could talk about each of these factors. I'll focus on some of the work that we did that might resonate with many of you, sadly, because a lot of us have had this experience. But if you've been feeling emotionally exhausted at work, like you just can't take another day, or you've been feeling mentally distant from the work that you do, or like you just can't actually do the work as well as you would like (these are all dimensions of burnout), you're not alone. Even before the pandemic, about one in five employees reported burnout. During COVID, in a lot of our studies, that went up to one in three. So one in three government workers were reporting burnout. And then if you look at frontline workers, so our teachers, our social workers, our police officers, law enforcement officers, that's about one in two. So, I mean, it's really astounding. Every other frontline worker is experiencing clinical levels of burnout right now. And even though we've used that term a lot during the pandemic, it was only quite recently, only in 2019, that the WHO recognized and defined burnout as an occupational phenomenon. So it's more than stress. It's different than anxiety. I think of it as more similar to depression, but it's depression at work. It's an occupational, work-related phenomenon. And therefore, I think the solutions to burnout are also workplace-related solutions. We know from quite a bit of research that burnout is particularly costly, not only to the individual who is experiencing burnout, with costs that are estimated to go up to about $190 billion annually for the US, but also to the organization. So burnout is associated with higher sick leave, with higher turnover, and potentially has impacts on performance. And where we've tested this or measured this, which is mainly in the medical space, that cost is also really high. So just the cost of medical mistakes and physician turnover due to burnout is about $4.6 billion annually. So when we think about these questions, we really start with this idea of, okay, but what can we do about it? So describing the problem isn't gonna solve anything. And so we use a framework for understanding this challenge called the Job Demands-Resources framework, which says, look, in any job there are gonna be risk factors around burnout, but we're gonna put them in these two categories. On the one hand, we have job demands. So these are the parts of your job that require sustained emotional or cognitive or even physical effort. In the public sector, sometimes that's the nature of the job. So what we ask of our social workers or of our teachers or of our unemployment insurance officers is to make tough decisions with limited information. Oftentimes, long shifts and mandatory overtime are just part of the work. And of course, the nature of what we ask of our frontline workers often requires emotionally demanding conversations with residents and clients.
And so, in some ways, it would be fantastic if we could reduce those job demands, but oftentimes we can't, at least in the short term. What we can do is potentially adjust job resources, the second bucket: the things that you have available to you to do your work. We have some evidence that peer support and supervisory support seem to be correlated with improved employee wellbeing. And the theory suggests that these job resources essentially operate as a buffer against the risk of burnout when you have high job demands. So we wanted to take this idea and take it to real government challenges. So I'll give you an example of what this looks like in one setting, but I'm happy to talk about other settings where we've looked at this. We worked with a series of cities that had this challenge in their dispatch centers for 911 dispatchers. So 911 dispatchers are the people who answer the phone when you call the police. What's really interesting about this group is that in most places in the US, still, even though they are handling life-and-death situations and hundreds of calls from people in crisis, they're considered call center workers and not first responders. So they don't have the status or the mental health support or the vacation time that's associated with being a first responder. And so it's potentially not surprising that we see very high levels of burnout, absenteeism, and turnover in this population. So what we did is we worked, in collaboration with the Behavioral Insights Team, with nine cities that had the same challenge and said, we have this challenge with burnout in our staff; what can you do? And we designed an intervention that really takes insights from this literature on connectedness and peer support. And the basic idea is, we have a lot of evidence that people who already have someone to turn to, someone to talk to at work, someone to share their experiences with, end up having better mental health outcomes. But the fundamental challenge is, how do you connect otherwise unconnected individuals, people who don't already have that network? And what we did is we created essentially a six-week intervention. It was a series of email prompts that encouraged people in this treatment to share experiences and advice with each other. So we didn't present this as a mental health intervention. We presented it as an opportunity to share advice and experience with other 911 dispatchers across these nine cities. And we particularly focused on how this could help newbies on the job, because it turns out people are much more willing to share advice and experience if they think it's gonna be useful to other people on the job. And even if you didn't participate actively yourself, every week you would receive somebody else's story in your inbox. So you would still get a reminder for these six weeks that there are other people across the country who know what you're going through, who are experiencing something similar to what you're going through, and who are part of the same kind of professional community. So just to give you a sense, this is what the typical email would look like. You'd get a vignette from somebody else's story. You can click to read other people's stories and experiences, and this is all anonymous and online. And you would get a prompt to share your advice and experiences. If you did click, it would take you to a platform that looks like this. But let me jump to the results.
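Before the results, it may help to see how an effect like this is typically estimated. Below is a minimal sketch in Python with simulated data; the column names, the model, and every number are illustrative assumptions, not the study's actual code or data.

```python
# A minimal sketch (not the study's code) of estimating the treatment
# effect in a multi-site RCT like the nine-city dispatcher study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "city": rng.integers(0, 9, n),       # nine participating cities
    "treated": rng.integers(0, 2, n),    # 1 = received the six-week emails
})
# Simulate a 0-100 burnout score at six months with an ~8-point drop
# for treated dispatchers (the magnitude reported in the talk).
df["burnout_6mo"] = 60 + 2 * df["city"] - 8 * df["treated"] + rng.normal(0, 10, n)

# OLS with city fixed effects; the coefficient on `treated` estimates
# the average change in burnout attributable to the intervention.
fit = smf.ols("burnout_6mo ~ treated + C(city)", data=df).fit()
print(round(fit.params["treated"], 2))   # close to -8
```

The city fixed effects absorb level differences across the nine sites, so the `treated` coefficient compares dispatchers within the same city.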
What we do is we measure burnout six months later. So this is just a six-week program, but we look over time to see what happens on burnout. And we find a pretty significant reduction in burnout across a bunch of different indicators of burnout. This is about an eight-point reduction. And you should ask me, is that a lot or a little? It's about the difference between average burnout for social workers compared to administrative staff. So a pretty meaningful reduction in burnout. But say you're not as interested in that part; you might be more interested in how it affected resignations. So we see, six months later, resignations cut by more than half, just by this very light-touch, low-cost intervention, because we were really targeting something critical, which is that people need to feel understood and need to feel that they're part of a community that understands what they're going through. In the second bucket of research, around resident-government interactions, again, the challenges are likely known to many of you. The Pew Research Center has been following trust in government in the US for many years now. And it's very clear that trust in government is falling. It was already pretty low and it keeps falling. And that has lots of consequences for how we think about resident-government interaction. So take-up of government programs and services is low, or lower than we would like. In interactions with government, either voluntary or involuntary, we see large disparities in who interacts with government, both on things like police stops but also community engagement efforts or participatory budgeting efforts. So in all cases, there seems to be a lot of room for improvement in how we think about resident-government interactions. I'll focus in this category on some of our work on the social safety net. So we've been doing quite a bit of work at the state level, and actually at the local level, with agencies that are struggling to increase take-up of their social safety net programs. So there's a lot of evidence done by others that shows how important means-tested programs are for long-term benefits, in terms of economic benefits, health benefits, education benefits. But regardless of what program you look at, whether it's the Earned Income Tax Credit or WIC or SNAP, we see a range of percentages of people who don't utilize the program even though they're eligible. So somewhere between 20 and 50% of people who are eligible for government assistance programs are not accessing them. And so again, conceptually, we think about this in terms of different types of burdens that people face. On the one hand, you have informational burdens. So you have to know that the program exists. You have to know if you're eligible, or what the eligibility requirements are. Then there are often compliance hurdles or burdens. So there is documentation that you need to bring in; there are interviews you need to show up for. All of those things add barriers that make it harder for people to access benefits. But there's also this third category, which we're gonna call psychological barriers, which are more about the stigma associated with being a participant in a government assistance program. Sometimes it's around frustration and anxiety about what that experience is gonna be like. And in a lot of the work that I do, we try to tackle each of these barriers in different projects and have had some interesting findings. I'll point to one example that came out of COVID around rental assistance.
So some of you will remember all of these headlines early on in the pandemic, when there was a valid risk that a lot of people were about to be evicted from their homes for the first time because they had lost their jobs. And money was flowing from the federal government to local governments to try to avoid this huge eviction crisis, in the form of rental assistance or emergency rental assistance. But what was happening is that money was being distributed, but cities themselves were not able to disburse the money fast enough. So take-up of that rental assistance money wasn't happening as fast as we would like. And so we worked with a few cities; I'll give you an example from Denver, where they had the funds and they wanted to increase the likelihood that people knew about them and took them up. And so we worked with them to even define how to do this government outreach. One of the big challenges, as many of you in government know, is how do you even find the people who might need your assistance to begin with if you don't have data. So we worked with both public and Denver-specific data to try to identify neighborhoods where it would be more likely that there are people who might be at risk of eviction because of various characteristics. And that led us to 62,000 renters, or addresses, that we could then send mass outreach to, to tell them about the program. And what we did, again using insights from behavioral science, is we sent people postcards to their homes. Again, we don't know a lot about these people. And we said, are you struggling to pay rent? You're not alone. Here's some information about what the city and county can offer. It's in English and Spanish. On the back, it tells you what you need to do next. So a really targeted behavioral science intervention to say, go here to apply for rental assistance if you need help. So this is kind of an informational nudge. But what we really were interested in is, what's the kind of language that would encourage someone who might be facing this challenge for the first time to even consider applying for rental assistance? So we had a control group that just continued business as usual. We had a postcard that looked like this. And then we had another postcard that looked very similar in its design; I'm just showing you the differences in text here, which really tried to de-stigmatize participating in rental assistance. So rather than saying, you're not alone, it says, you're not alone and it's not your fault. A lot of people need help right now. Or rather than saying, a staff member will determine if you're eligible, we said, a staff member will help you determine if you're eligible, to really bring the agency back to the individual renter. The idea was really to reduce both internalized stigma, or internalized shame that it's somehow your fault that you are in this position, as well as anticipated stigma, or the fear that somebody else might judge you negatively if you need rental assistance. And what we find is really quite striking. So remember, this is mass outreach, so we're sending it across neighborhoods. We don't want the control group rate to be very high, because we want only people who are eligible to apply. But just providing information significantly increases the number of application requests, and providing that same information with slightly tweaked de-stigmatizing language does so even more. So this is a 60% increase in application requests compared to the control, and a 14% increase compared to very similar language.
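To make those comparisons concrete, here is a minimal sketch of how the three arms' application-request rates could be compared. The counts are invented, chosen only so the lifts match the 60% and 14% figures above; the two-proportion test is an illustrative choice, not necessarily the study's actual analysis.

```python
# A minimal sketch of comparing application-request rates across the
# three postcard arms (control, information, de-stigmatizing).
from statsmodels.stats.proportion import proportions_ztest

requests = {"control": 210, "information": 295, "destigmatizing": 336}
addresses = {"control": 20_666, "information": 20_667, "destigmatizing": 20_667}

def lift(a, b):
    """Relative increase in the request rate of arm a over arm b."""
    rate = lambda arm: requests[arm] / addresses[arm]
    return rate(a) / rate(b) - 1

print(f"de-stigmatizing vs control:     {lift('destigmatizing', 'control'):+.0%}")
print(f"de-stigmatizing vs information: {lift('destigmatizing', 'information'):+.0%}")

# Two-proportion z-test for the headline comparison.
z, p = proportions_ztest(
    count=[requests["destigmatizing"], requests["control"]],
    nobs=[addresses["destigmatizing"], addresses["control"]],
)
print(f"z = {z:.2f}, p = {p:.4f}")
```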
And we see the same pattern with the number of applications submitted and with the amount of money that went out the door. So we see the same pattern where the de-stigmatizing language does better. Just to give you a sense of what these numbers mean in practice, about 25% of all the applications that Denver received during this period were from these two groups, the information and the de-stigmatizing groups. So a large chunk of the applications they got in total were coming out of these postcard interventions. And what was particularly interesting to me is that it seems like these de-stigmatizing approaches had demographic effects as well. So in the control group, the group that didn't get this postcard and was just receiving business-as-usual communication, about 5% of applicants were from Black households; with the de-stigmatizing postcard, that goes up to 26%. So we see not only an increase overall in the number of applications, we also see a shift in who applies. And we think this has to do with the compounding effects of different types of stigma in the world and in the US. So if there's already stigmatization against Black and brown households, the de-stigmatizing language might be particularly effective for those households. I'll wrap up with this third area of research, which we're currently calling re-imagining evidence-based policymaking. And I don't have solutions in this area yet, but I'll describe the problem in different ways. As many of you know, the academy has been producing evidence that is policy-relevant for decades, if not centuries, but the production of evidence does not necessarily mean that that evidence is being used and adopted in government. And even when good ideas are taken to scale, or we're trying to do this, if we actually measure it properly, sometimes the effects are much smaller than we expected. So we saw a big effect in some pilot program or in the lab; we then take it to scale and it doesn't seem to be as effective. And at the same time, the definition of success is shifting. So what a community member defines as a successful policy outcome might not match what the research community is defining as a successful outcome. This has been especially stark in public safety, where we have years and years of research where the outcome metric is a reduction in crime. But then when you ask community members who are most impacted by policing, how do you define public safety, they use all sorts of words that have nothing to do with crime reduction or arrests. And so this has brought up a whole debate in evidence-based policymaking about who gets to even define the measures that we use. I'm gonna focus on a project that we've been doing around evidence adoption. So what happens after we have the evidence? Over a few years, the Behavioral Insights Team, where I was head of research and evaluation, ran lots of really rigorous randomized controlled trials with cities across the US as part of the What Works Cities initiative. And so what we did in this project is we said, okay, we successfully got cities to do the thing that we've been asking governments to do. They ran randomized controlled trials, really rigorously, in their context, with data-driven teams, to figure out what works on some outcome that they cared about. And we went back and we said, okay, five years later, did you adopt the thing that you tested in your context? Did you adopt the treatment that you were testing five years ago?
And what we find is that about 27%, so almost 30%, of those treatments were adopted at scale five years later. Now, depending on your perspective, this is either a very large amount or a very small amount; it depends what you predicted. But what was striking for me, and kind of heartbreaking, is that the strength of the evidence was not a predictor of whether or not something got adopted. So we spent so much time in the research community worrying about p-values and statistical significance and whether or not the evidence is rigorous and strong. It didn't seem like that was a predictor of whether or not the evidence is used, which is a totally different outcome. Even traditional measures of state capacity, like your population or your budget, don't seem to predict whether or not this evidence that was produced is actually adopted at scale. We have some evidence that some topics or nudges are more palatable, but it's not particularly strong. What I had predicted was gonna be the main predictor was staff retention. So I thought if the staff member who ran the study was still in government, surely the idea would have been adopted at scale. And we have some evidence of that, but it turns out the main predictor of whether or not an innovation is adopted is whether or not that innovation was tweaking a preexisting process. So new innovations, things that don't come with a preexisting infrastructure, don't seem to be adopted at all. Tweaks to existing communication end up being adopted quite a bit, and there the evidence matters. So conditional on having that infrastructure, the right kind of evidence-based communication is used. So this to me opens up a whole host of questions, which I'm sure we're gonna discuss, about how we support evidence adoption more broadly after we have that evidence, so that it's not just small tweaks that get adopted, but new innovations, if they work, can actually make it to scale. So where do we go from here? My lab is doing a bunch of research right now in all of these areas. We're thinking about how to recruit and retain the next generation of government talent. We're doing a lot of work on fellowship programs and thinking about their impact on who enters government. We're continuing to do work around trust and stigma in state-resident interactions and thinking about how both trust and stigma shape resident experiences more broadly. And as I just mentioned, we're really excited to think through what are the types of investments, pre- and post-testing, that will increase adoption of bigger ideas. And I should say my secret cunning plan for the lab is that we're also building a whole new generation of scholars who don't need to pick between policy impact and rigor, and who are really thinking about research that matters and is useful to policymakers but continues to be rigorous for academic audiences as well. So I'll stop there and say thank you with the smiling faces of my team. And I'm really looking forward to our Q&A.
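As an illustration of the adoption analysis just described, here is a minimal sketch that regresses adoption on candidate predictors. The data are simulated to mimic the headline finding, and all variable names are illustrative assumptions, not the project's actual dataset or model.

```python
# A minimal sketch: regress whether a tested treatment was adopted at
# scale five years later on candidate predictors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "effect_size": rng.uniform(0, 0.3, n),     # strength of the evidence
    "staff_retained": rng.integers(0, 2, n),   # 1 = study's staffer still in gov't
    "tweaks_existing": rng.integers(0, 2, n),  # 1 = tweak to a preexisting process
})
# Simulated truth: adoption is driven mainly by whether the innovation
# tweaks an existing process, not by how strong the evidence was.
logit_p = -2.0 + 2.0 * df["tweaks_existing"]
df["adopted"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

fit = smf.logit("adopted ~ effect_size + staff_retained + tweaks_existing",
                data=df).fit(disp=0)
print(fit.params)   # tweaks_existing dominates; effect_size is roughly noise
```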

- Terrific. Thank you so much, Elizabeth. Really appreciate that. We're gonna open the session up for questions. To ask your question, please use the virtual hand-raising feature of Zoom, and in true Kennedy School fashion, please keep your question brief and end it with a question mark. You'll be notified via the chat feature when it's your turn to speak. And please be sure to unmute yourself when you hear from the staff. And all of us on the call would appreciate it if you could share your Kennedy School affiliation. I'm gonna start things off by asking a question inspired by Jose Litra Neto, who is an MPP 2015. Jose would like to know: has the People Lab worked with government innovation labs to define, measure, and scale new policies and practices? Maybe you could also share what a government innovation lab is. Can you offer some examples of the best ways for such labs to harness the insights of researchers?

- Yeah, it's a great question. So all the work, as you saw, that we do is in collaboration with government partners. So every project that we do in some ways is both defined by our government partners and then co-designed with our government partners. But I think what you're asking about, Jose, is the teams within government that have kind of an innovation label, or sometimes they're called evaluation teams or data and research teams. They have all sorts of names: nudge units. We do work with those teams when they exist. So for example, we've been very fortunate to work with one of the best teams in the federal government on this stuff, the Office of Evaluation Sciences, which is based at GSA, so it's staffed by government employees. And they do a lot of work to provide social and behavioral science research for the whole federal government. And so members of my team, myself included, are fellows through an IPA agreement, an Intergovernmental Personnel Act agreement, with this government team to provide research in areas where we have particular expertise. So we've been working through OES on projects related to the PMA Learning Agenda, the President's Management Agenda Learning Agenda, the first priority of which is the workforce, which I'm really excited about. Also, when the pandemic hit and the federal government wanted to do pulse surveys of all 2 million government employees, we worked through that team to help choose the questions and run the actual infrastructure to do those pulse surveys. So we ran three rounds of pulse surveys for all employees on key priorities. And then we do the same thing at the state and local level. So for example, we've been doing a lot of work on the Child Tax Credit with the California Department of Social Services. They have a team called RAD, which is like their innovation team, a research and innovation team. I'm very proud that People Lab alums now work for the government. And so we're kind of building the capacity to do this work in government. And so we support them on some projects, but ultimately the success story is when they themselves then do other projects using that infrastructure. So we've done that in California, where we did a project that had some impact, and then they used the same infrastructure and methodology to reach another 3 million Californians who were on their caseload. So yes, the short answer is yes, we do that work. Oftentimes when governments wanna work with us, they reach out and we go through a process of making sure that we have a value add to begin with, and then think through capacity and alignment on the work. But ultimately, if it answers a question that multiple governments have asked, so it contributes to generalizable knowledge, and we feel like we have a value add, then that's exactly the kind of project we wanna take on.

- Thank you so much. I wanna say we have a participant ready to ask her question. Jane, would you please identify yourself, state your affiliation with the Kennedy School and ask your question?

- [Jane] I'm sorry, that was a mistake. I don't have my, I didn't mean to have my hand raised, but thank you so much for the presentation. It's fascinating.

- Thank you, Jane. That's lovely. I'm gonna move to a question that was submitted in the chat from Chuck Flax. Chuck has an MPP from 1992 and works with homeless people, and he wants to know: does your experience with more agency-focused language also apply to highly vulnerable populations like the chronically homeless? And do you have homeless-related experimental evidence that you could share?

- Yeah, thank you, Chuck. Those are excellent questions. I think we're still at the beginning as a research community of understanding what messaging works best for whom. So we have some sense of what works best in general, and then we're thinking through whether or not this applies more or less to different populations. We've been doing more work around housing and homelessness, but still in the prevention phase. So I mentioned the rental assistance work that we were doing. We're now doing a bunch of projects around how do you get more landlords to be interested in and willing to rent to a Section 8 voucher holder. So as you know, there's a critical supply challenge: we don't have enough housing or enough landlords that are willing to rent. And so there, we're really focusing on nudging the person in power, the landlord, to change their perspectives on who Section 8 voucher holders are. And so we're doing some work there. We haven't done work that directly targets people who are currently experiencing homelessness, but a lot of the programs that we're working on in the housing space are prevention measures. So for example, just a recent conversation this week was: when a landlord puts in a notice that they're going to evict, how can we make sure that person gets access to a bunch of resources that the cities already provide, so that they are not evicted? So those are the types of projects we're doing now. My sense is agency-focused language or de-stigmatizing language is particularly important for vulnerable populations. And so I'm really looking forward to doing more research in that space.

- Thank you for the question and for that answer. I wanna make sure that people know you can use the Zoom raise-your-hand feature. I talk with my hands, so I sometimes set that feature off, but please do if you have a question. In the meantime, I'll go ahead with another pre-submitted question. This one is from Jeff Zlanis. Jeff is an MC, mid-career, sorry, MPA from 1992. And he wants to know: how much of your research focuses on the systems that exist versus the people, both in government and the citizens who use these systems?

- Yeah, I think this is a really, really important question. So part of the reason I say that I do public management first, and then use insights from behavioral science to do public management work, is because I think there's a lot of depth of knowledge around what it means to think about public sector systems, as opposed to thinking of these as just individual-level challenges. So when we think about burnout, for example, a lot of my research suggests that this is not an individual-level challenge. And so the solutions can't be individual-level solutions. Similarly, in the work that we're doing around the social safety net, it is fundamentally a systemic challenge as opposed to an individual-level challenge. That being said, part of the reason the People Lab is called the People Lab is that, because the work we're doing is trying to be solutions-oriented, even when we say systems, it's very helpful to ask: okay, how do we change a system? What does it mean to change a system? What does it mean to change an organizational culture? And so we try to think about those as the day-to-day interactions that people have at work, or the people in power who can change the law or the policy or can adjust the system, and think of those as people who we can then either nudge or support in making those changes, in the same way that we would think about a system. So it is certainly not the responsibility just of a person who is low-income to figure out how to navigate a bad system. But by working directly with government partners and the people who do have opportunities to change those systems, I think we can do both, or try to address both. But I think it's a really important question.

- Thank you for that answer. I have another pre-submitted question. This one is from Tushar Prabhu. Tushar is, I'm gonna goof this up, I'm not sure what RP stands for, but I think he's a mid-career graduate from 1991, and he wants to know specifically about nudges and how your work relates to them. He particularly had in mind the setting of default options that was implemented by David Cameron's group in the UK. And I'm wondering if you worked with them, since you did work in the UK.

- Yeah, so David Cameron, the prime minister, started the first kind of large behavioral insights team within government, and then that team spun out and is called the Behavioral Insights Team. And I worked there in the UK and then launched the US team with Elspeth Kirkman. So I was head of research and evaluation for BIT North America. So, I mean, we can talk about nudges all day. Nudges are a subset of behavioral science or behavioral economics interventions that really focus on light-touch or low-cost tweaks to existing programs or services or systems in ways that have a disproportionate impact. So when this first started, and you can imagine why I was particularly excited coming out of the Greek financial crisis, the promise or the idea was, look, we can make a lot of disproportionate changes even without spending an equivalent amount of money. And what we found over time is that that is certainly true. So in a lot of my research, we think about individual tweaks that have a disproportionately large impact. And then we've done some work with Stefano DellaVigna where we look across the board, and we find that the average effect of a nudge intervention is statistically significant and positive. The size of that effect is about an 8% to 10% increase. So depending on what your priors are, that's either a lot or a little. If I tell you that with an essentially zero-cost intervention you can improve your outcome by 8% to 10%, I think of that as a huge success story. Others, however, I think were imagining much larger effects from nudges, especially people who hadn't run behavioral interventions themselves. So if you're imagining that you're going to overcome poverty or change homelessness with a zero-cost intervention, then I think the evidence suggests that that's not the case. And part of that is publication bias. So this is an academic problem that we're working on and publishing on, which is: if you only publish success stories that have these huge effects, then people's beliefs about what a nudge can do are distorted in that direction. So my view of the field is, there's actually quite a lot we can do on the margin with nudges to improve people's lives today. The effects, when they are rigorously tested, are real. But we're not going to change the world without going beyond just individual nudges. And so behavioral science is much bigger than just a nudge, and behavioral economics certainly is much bigger than just a nudge.
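For readers curious what "looking across the board" at nudge effects involves, here is a minimal sketch of an inverse-variance-weighted (fixed-effect) pooled estimate, along with the publication-bias intuition. The per-trial effects and standard errors are invented; note the talk's 8% to 10% figure refers to relative lifts, while this toy example pools percentage-point effects.

```python
# A minimal sketch of pooling nudge effects across trials with
# inverse-variance (fixed-effect) weights. All numbers are invented.
import numpy as np

effects = np.array([1.4, 0.8, 2.1, 0.3, 1.0, 1.7])   # percentage-point lifts
ses = np.array([0.5, 0.3, 0.9, 0.2, 0.4, 0.6])       # their standard errors

weights = 1 / ses**2                                  # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))
print(f"pooled effect: {pooled:.2f} pp (SE {pooled_se:.2f})")

# Publication-bias intuition: pooling only the "significant" trials
# (|effect / SE| > 1.96) mechanically inflates the average.
sig = np.abs(effects / ses) > 1.96
print(f"significant trials only: "
      f"{np.sum(weights[sig] * effects[sig]) / np.sum(weights[sig]):.2f} pp")
```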

- I wonder if we could get more specific with some of your work, and in particular, I wanted to ask you about some work you did on financial aid. This is partly to make a little bit of a bridge to our earlier Wiener Conference Call with David Deming, who spoke a lot about this issue. And you have some really wonderful evidence as well about how messaging can result in exactly these vulnerable populations making better use of a really important resource, financial aid. So if you could speak to that.

- Yeah, absolutely. So this is, in the administrative burden framework, a very typical area. Anyone who's tried to fill out a FAFSA, either for themselves or for their kids, understands the idea of these huge burdens that we put on people to access benefits that they're eligible for. There are three ways that we can think about this. So in my work with Jesse Rothstein, Prakash Reddy and others, we look at how do we get high school students to engage with the California financial aid system. So a lot of money has been spent on increasing financial aid, and then one of the barriers is you need people to actually register to use it. And so what we find is that light-touch interventions, or changes to how you even share that information with people, do increase engagement. Telling people that they belong in college seems to be very effective. Just simplifying the language seems to be very effective. But just providing information alone isn't enough to overcome all the other barriers associated with accessing financial aid, the compliance hurdles that go beyond that. So I think of these light-touch nudges as doing exactly what it says on the box, right? We can get people to go to the website and learn more, and we can increase that significantly. But there are other hurdles on documentation and financial aid access down the line that we also need to adjust if we wanna see the big effects that we wanna see. And other researchers have found large effects when you start tackling some of those bigger issues. So Sue Dynarski, for example, has done some really cool work showing that if you guarantee financial aid early on for people who are going to qualify, that's gonna increase college enrollment. Earlier work shows that if you help people fill out the FAFSA when they're going to do their taxes, that also increases college enrollment. So helping with those compliance hurdles can be really, really important as well in getting people across that finish line.

- That's so interesting. Thank you. I wanna turn over and invite a question from Nancy Zwang. Nancy is an MPP 1978. And Nancy, we will unmute you.

- [Nancy] Yes, you did. Thank you very much. Back on one of your buckets, which was increasing trust in government: the programs it looks like you targeted were very worthy, helping low-income people access programs they may not realize they're entitled to. But that's still a small portion of the population. What about working on the programs through which the regular, everyday person interacts with government, whether that's the DMV, for example, or the IRS, to help people get a better experience in general? That might go a long way toward improving people's trust in government, which would then have spillover effects for the other situations you're working on. So have you ever thought about trying to do more mass-market, state-resident-type interactions?

- Yeah, you're speaking my love language, Nancy. So it's funny, when I started this work and I said, I really care about the operations of government, people would say, like, yeah, okay, but the real thinkers think about policy design. And I'm like, no, I'm thinking about the line at the DMV. That's what I wanna fix, that experience. So you're absolutely right. The theory suggests that if you improve the overall performance of government, that should increase trust, and that should have subsequent effects. Like, if I have a good interaction at the DMV, what does that say about whether or not I pay my taxes or register to vote or have other positive engagements with government? There isn't actually a lot of work yet to make all those links. But for example, we're doing some work right now with the chief engagement officer of New York City that speaks to your point, where New York is really a pioneer in the US in thinking about a trust score as a fundamental performance metric for the city as a whole, as a kind of wellbeing performance metric. So if New Yorkers trust the government, what does that mean overall? And so we're about to pilot the first rounds of that. And the questions that we're asking are essentially trying to get at broader, New York-wide, New Yorker-wide beliefs about whether they trust the government. And we think about trust in a few ways. So one is just competence. Do I trust that the government is good at doing what they need to do? The other is around whether or not they care, like benevolence. Do they care about me and my community and wanna do the right thing? And then the third is around honesty and integrity. So does the government, you know, keep its commitments in some fundamental way? And so our hope is that we're gonna have these trust scores that we can then map onto experiences with government. And we're thinking about it both in terms of performance. Like, if you had a good experience when you called 311, how does that impact your trust score? But also, if you know more about how hard the government is trying on things like community engagement or improving services, does that in itself increase trust, even if we haven't gotten it right yet? So a lot of governments right now are trying to do community engagement better. Most people don't know about those efforts. If you were to know about those efforts, does that increase your trust and your willingness to engage? So that's what we'll be testing in New York soon. But I think it's a really, really critical point.
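Here is a minimal sketch of how survey items might roll up into the three trust dimensions just described (competence, benevolence, integrity) and one composite score. The item names, scales, and unweighted averaging are illustrative assumptions, not New York City's actual methodology.

```python
# A minimal sketch of building trust-dimension scores from survey items.
import pandas as pd

responses = pd.DataFrame({        # 1-5 agreement, one row per resident
    "competent_services": [4, 2, 5],
    "handles_problems":   [3, 2, 4],
    "cares_about_me":     [5, 1, 4],
    "cares_community":    [4, 2, 5],
    "keeps_commitments":  [3, 1, 4],
    "honest_with_public": [4, 2, 5],
})

dimensions = {
    "competence":  ["competent_services", "handles_problems"],
    "benevolence": ["cares_about_me", "cares_community"],
    "integrity":   ["keeps_commitments", "honest_with_public"],
}

scores = pd.DataFrame({dim: responses[items].mean(axis=1)
                       for dim, items in dimensions.items()})
scores["trust_score"] = scores.mean(axis=1)   # simple unweighted composite
print(scores.round(2))
```

Per-dimension scores like these could then be mapped onto reported experiences, such as a recent 311 call, to test the links the speaker describes.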

- Absolutely. I have a final question for you. And this takes us back to one of your first areas, and I know a very, very continuing, longstanding area of interest, which is the workforce. In particular, I wanna ask you about your research on diversifying the police, which is an area of huge concern and a lot of effort, and the strategies that you've tested to attract more diverse candidates and also to help them navigate the process so they not just express interest but, as you said, apply. What was effective and what didn't work?

- Yeah, thank you for that question. So when I first started doing work around the workforce, this was an urgent issue. It's still an urgent issue. As you note, how do we diversify the police? How do we diversify law enforcement more broadly by increasing the number and types of people who apply to be an officer? And so I started work in Chattanooga, Tennessee, but now we've done work in over 20 jurisdictions across the US with the Behavioral Insights Team and others, to really test what's the kind of language that would encourage someone who hasn't considered a job in policing to consider a job in policing. And what we find is that the typical messages around public service motivation, come serve your community, aren't particularly effective. And there are many reasons for that. My sense is that government has kind of lost its monopoly on impact. If you wanna make a difference in the world as a young person, you're basically told you can do that in the nonprofit sector, you can do it in tech, you can do it in the public sector, you can do it in social enterprise. So it's not really a message that disproportionately gets people to think about government jobs anymore. What we do find in the policing context, and now we've seen in other places as well, is that telling people that the job is hard, that it's a challenging job, and telling people that this is a long-term career, triples the likelihood that people apply, and quadruples it for people of color and women. And so we do see that there are other messages that work, but we have to really be testing even our own perceptions about why someone would join policing. Because if you ask people who are currently police officers or frontline workers, why did you join, they will say, correctly, because I wanted to make a difference. I joined government because I wanted to make a difference. But that doesn't mean that that's the right way to get new and different people, who are not already applying, to consider these jobs. And so we see that in policing, and we're seeing some interesting patterns in general with recent college graduates about government. So a challenge message is effective. A message around systemic change is effective, which goes back to one of the questions: if you want a more diverse applicant pool for government positions with a new generation, a message that says, if you care about systems change, come work for government, seems to be particularly effective.

- Such important work. Well, thank you so much, Professor Linos. And thank you to everyone who joined this Wiener Conference Call. We really appreciate it. Our next conference call will be on December 6th and feature Dr. Larry Summers, who is the Frank and Denie Weil Director of the Mossavar-Rahmani Center for Business and Government at Harvard Kennedy School. He's also the Charles W. Eliot University Professor, as well as a former president of Harvard University and a former US Secretary of the Treasury. So we look forward to having you back with us in December. Elizabeth, thank you again.

- Thanks so much for having me and thanks for the questions.

- Thank you. Bye-bye all.