00:00:02 Speaker 1: Bloomberg Audio Studios: podcasts, radio, news. 00:00:18 Speaker 2: Hello and welcome to another episode of the Odd Lots podcast. I'm Joe Wisenthal. 00:00:23 Speaker 3: And I'm Tracy Alloway. 00:00:24 Speaker 2: Tracy, it may have changed a little bit in recent weeks or months, but I think by and large, if you talk to economists about the long-term impact of AI, particularly on jobs, they point to history and they say, there have been many technologies in the past that people thought were going to be very disruptive and destroy all kinds of jobs, and in many cases they did. But technologies create new jobs. We can't necessarily anticipate beforehand what they're going to be, and AI is kind of no different, ultimately. 00:00:57 Speaker 4: Yes. But then to your point, you ask, well, what specific jobs do you have in mind? And I get that, you know, it's hard to tell. But it's so frustrating, right? Because here's this big new technology, it's supposed to be a productivity boost, and yet no one is actually sure what new jobs it's going to create from that productivity boost. 00:01:19 Speaker 2: I love him to death, but Adam Ozimek wrote a piece several weeks ago, and he was like, well, the player piano disrupted the existence of piano players, but hotels still pay money for an actual human piano player in the lobby rather than a player piano. Which is true, but not many people have jobs that are equivalent. And the thing is, like, if I want to get this insurance form reimbursed or whatever, I don't care about the human touch on that per se. 00:01:53 Speaker 3: I think there's something to that. 00:01:55 Speaker 2: I'd be very happy to have the equivalent of the player piano there. 00:01:57 Speaker 4: There's something very satisfying about the idea that we're all just going to be performative in a way. But I actually think that's kind of where we might be heading, where, as I've said before, the sort of social skills, the looksmaxxing, the personal branding, the multitasking, I guess, become more important. 00:02:15 Speaker 3: So the future is performative humanity. 00:02:17 Speaker 2: OpenAI just spent a ton of money on TBPN. I really love those guys. They're both very good-looking guys, man. So I sort of feel like, okay, this is the biggest AI company in the world sort of making a bet on that. 00:02:29 Speaker 4: They're great characters, two very nice and charismatic humans. 00:02:33 Speaker 2: Yeah, yeah, yeah, so maybe that is the future, just being nice and charismatic. Anyway, we need to talk more seriously about this, because I don't know. I kind of feel maybe this is not just going to be like the steam engine or whatever. It might be very different. Maybe we won't have jobs, maybe there will be new jobs. Anyway, we're going to be speaking with someone who's been talking and thinking a lot about this and about why AI might be different. We really have the perfect guest. Alex Imas is a professor of economics and applied AI at the University of Chicago and does a lot of writing on this topic. So Alex, thank you so much for coming on Odd Lots. 00:03:05 Speaker 5: Thank you for having me. 00:03:06 Speaker 2: It's pretty cool that you have the job of professor of economics and applied AI. Like, yeah, that worked out pretty well. You picked a good field at a good time. Yeah.
00:03:15 Speaker 5: Yeah, I mean, I've been an economist for much longer than I've been a professor of applied AI. I've been studying human behavior and human decision-making for about twelve years now, more than a decade. And when ChatGPT first came out, I was kind of taken aback. This was a few years ago now. After about a week of using it, I was like, this is going to be huge for the economy. There were several people who kind of knew that it was coming and knew what impact it was going to have, so I started talking to those people, and I quickly started retooling. I trained my own model, you know, I got into it, and I've been trying to play catch-up ever since. 00:04:00 Speaker 3: Wait, what did you see in ChatGPT specifically? 00:04:01 Speaker 4: Because you would have been very early. At that time, a lot of people were using ChatGPT basically as a sort of enhanced search engine, a tool to write poems, tell silly jokes, whatever. But you saw something that was serious for the labor market. 00:04:18 Speaker 5: Yeah. Once you started using it, you saw that it was able to do basic cognitive tasks to a decent degree. Not so well in the very, very beginning, but even after a few months, and certainly within the year. It wasn't like, we are going to replace that person, but it was doing pretty sophisticated things. And the jump from where we were thinking about AI as these very, very targeted things, like AI will play the game Go or something like that, to something where, whoa, it can write an essay, it can tell me about this accounting property, it can make a forecast. All of a sudden, the generality of the technology just exploded. And to me, that was a huge deal. 00:05:06 Speaker 2: Yeah, the generality of it. I mean, I guess literally that's the G, right? Yeah, exactly. But yeah, absolutely. I have to say, this is a bit of an aside, but learning a little bit more about where AI was pre-LLMs, or pre-ChatGPT, almost makes me even more impressed. I don't know if this is common, but when you look at some of what was cutting edge in twenty nineteen, and then you look at what's cutting edge in late twenty twenty-two, I'm almost more impressed than if I hadn't known what they were up to in twenty nineteen. It's a huge gap over those few years. 00:05:40 Speaker 5: It's a huge gap. But at the same time, there was a kind of path towards AI, the way that AI was being worked on for a long time, which was these very specific, purpose-built technologies. And I think Geoffrey Hinton and other people were kind of working on their own for a long time in the wilderness, thinking, maybe we can do something much more general than that. Maybe we can come back to this idea of AGI versus these very specific tools. The whole term AGI, the general part of it, the reason that term came out was in response to these very specific technologies that were being developed, which were by design not general. So Shane Legg, who was one of the people who, I think, coined the term, was saying, look, let's think about the general part of intelligence, and let's try to build a technology that is as general as the human mind.
Let's go back to that starting point. 00:06:37 Speaker 2: So, like, if someone makes a model that can tell the difference between written and spoken word, that's mind-blowing, it's an incredible breakthrough, but that's not a general technology. That's a specific tool. 00:06:47 Speaker 4: What time did you have in our betting book for Joe to refer to his vibe coding? 00:06:51 Speaker 3: I had two minutes thirteen seconds. 00:06:53 Speaker 2: Okay, so I made it longer. No, I made it a little bit longer. 00:06:56 Speaker 5: It's because I talked for so long. 00:06:58 Speaker 2: I'm sorry. 00:06:59 Speaker 4: No, fair enough, it's a fair point. I mean, to me, the moment when things seemed 00:07:03 Speaker 3: to get very serious was the release of Claude Code. 00:07:06 Speaker 4: And at that point you went from, okay, the model could not just tell you things, but it could actually do things for you. Was that the vibe shift that you anticipated or experienced as well? 00:07:18 Speaker 5: I mean, many people were talking about this, that this vibe shift was going to happen. People were telegraphing it for months and months: look, when agents start taking off, things are going to change as far as how people perceive this technology. Because the thing about agents, versus just the web-based chat interface, is they can do stuff on your computer. You could tell it, look, make me a spreadsheet, and it will go and make you a spreadsheet using the tools that are available on your computer, not just say, okay, here is how you would make a spreadsheet, but you have to go do it yourself. And that's a paradigm shift as far as the economics of the technology. 00:07:56 Speaker 2: So I set up this sort of, maybe it was a straw man, but I set up a sort of straw man that maybe we're going to knock down in this conversation. How would you describe the modal view of the impact of AI on the labor market among the economics profession, to the extent there is one? 00:08:16 Speaker 5: I definitely think there is one. There's a very nice survey done by a whole team of people. Kevin Bryan was one of them, and Basil Halperin was another. They released a survey where they asked for forecasts from economists and AI technologists. Now, this is a self-selected group of economists. These are economists who are working on AI, okay? So it's not the whole field. But one of the things that you got from that survey was that they're very much aligned, right? So economists, at least the ones who are actually working on and thinking about the technology, they think there will be a big impact as far as capabilities, and there will be some impact on the labor market, but nothing astronomical. And we're talking about, like, twenty thirty, twenty fifty and things like that. There's going to be substantial capability increases, but the growth is going to be pretty moderate. It's like an extra two to three percent. And the really interesting thing for me from that survey was that the technologists were a bit more optimistic than that as far as the productivity growth, and some were thinking that there will be much more unemployment. But for the most part, the two groups kind of agreed. I was personally surprised by that survey, and this came out, I think, a week or two ago. I thought that there was going to be a lot more daylight between the two groups.
00:09:40 Speaker 4: Well, the other thing that you tend to see is people release these charts of, like, which job is most exposed to AI, and it's usually, you know, a knowledge worker at the top or something like that. Your work is really interesting to us because you point out that a job is much more than just the sector that you're actually working in. Tell us more about that. 00:10:00 Speaker 5: So the exposure measures came from this literature, mainly this one paper by Daniel Rock and Pamela Mishkin and co-authors that was published in Science, with one of the greatest titles: "GPTs are GPTs." You know what GPT is, but GPT in the second instance means general-purpose technology. They basically started mapping jobs as being exposed to AI. But it's really important to understand what that number means. That number means that AI could do fifty percent of a task, right? And how many tasks are in the job that AI can do fifty percent or more of? So there's a couple of things in that statement. First, fifty percent is not one hundred percent. That's obvious, right? You still need a human in the loop if AI can do fifty percent. But second, there's the fact that a human job is a bunch of different tasks, right? This is not a new point. David Autor, you know, has worked from the early two thousands with co-authors on this, the task-based model of jobs. Daron Acemoglu has the canonical model on this. And the idea is that when we look at a job and we say, look, your job is exposed, let's say it's fifty percent exposed, it really, really matters which tasks in your job are exposed and how those tasks relate to one another. So let's say I have a job, and I have a whole bunch of completely meaningless garbage that I'm doing, but what I have a comparative advantage in, and what I'm really getting paid for, is like twenty or thirty percent of the job. If AI is automating the kind of meaningless, rote things in my job, I can take all of that time and focus on the parts of the job that are my comparative advantage. What does that mean? It means I'm going to become more productive, and I'm going to get paid more, even though my job is really exposed. Now what does that mean for the labor market? Now you have to think, okay, so a person is gonna get... 00:12:07 Speaker 4: So just to be clear, before we go any further: if I'm working on a factory floor and one of my tasks is to pull a lever, that is something that could presumably be automated. But if the other part of my work is to observe how things are actually working on the floor and to report back to managers, that might be something that's still valuable under our sort of AI future. 00:12:31 Speaker 2: And if the lever part gets automated, the theory is that not only will Tracy be more productive, she should get paid more for it. 00:12:39 Speaker 5: Yeah, exactly, because of the increased productivity. This is the O-ring model of jobs. Avi Goldfarb and Joshua Gans have this really nice paper. 00:12:48 Speaker 2: Can I just ask a quick question here? Like, how good are we, and by we I guess I mean the economists who study this, at actually being able to take a job that someone has and write down a list of these tasks? Describe it? How good are we at that? 00:13:05 Speaker 5: Pretty good, actually. I would say on that dimension, we're pretty okay.
There's the O*NET database that has very, very detailed records: here's a job, and here's a whole vector of things that are involved in that job. Okay? So I'd say on that part, just listing the tasks, pretty good. The thing that I think we're less good on is how those tasks relate to one another. There's a term for this called complementarity. 00:13:29 Speaker 2: Yeah, talk about that. 00:13:30 Speaker 5: So the weak-links model is essentially saying, look, suppose the tasks are completely separable. Let's say I pull a lever at my factory, and I talk to people on the factory floor, and these are completely independent. If I fail to pull the lever correctly, the other part of my job is unaffected. Then there are other jobs, like cooking, for example. Let's say I'm really good at ninety percent of the job, but I really screw up the seasoning. Right? That meal tastes like garbage. 00:13:59 Speaker 3: So you succeeded in your tasks. 00:14:01 Speaker 5: But you haven't succeeded at the job. So when the tasks are interrelated, screwing up on one or two tasks means you did not complete your job. It's almost a zero-one sort of relationship. So the extent of that complementarity, how these tasks are related, will determine the extent to which automation is going to affect the labor market. And we don't have good numbers on that. 00:14:22 Speaker 2: So this is really interesting. We're good at writing down the list of the tasks. Yeah, we are not good at writing down the sort of deep relational links between 00:14:31 Speaker 5: the tasks and how they fit together. Exactly, exactly. So that's something we need data on. The other part that we really need much more data on, and I was recently quoted as saying we need almost, like, a Manhattan Project-level effort on this, is a term from economics called the elasticity of consumer demand. And that basically means: how much more of something will people buy when the price changes? 00:14:59 Speaker 1: Right. 00:14:59 Speaker 5: So, let's say a person becomes a lot more productive, right? For the same resources, they can make a lot more of the product, and their wage rises. What does that mean for the labor market? If they become more productive given the same inputs, their wage rises. But the firm is also probably going to be paying less money to produce the same output, and if it's a competitive industry, prices are going to go down. If consumers don't respond by buying a lot more of the product, the firm is going to fire a bunch of people, because it can do more with less. But if, when prices come down, people buy way more of the product, then it might hire more of the same people. And in many sectors we've seen the second thing play out. What's an example? People are arguing that software is actually one of those sectors. There's been a bunch of talk looking historically at what productivity has meant for the technology sector. It usually means a lot more consumer demand. So there's this really active debate now about what coding agents are actually going to do to software engineers. And some people are arguing, look, we have seen historically pretty elastic demand, and so we're going to potentially see a lot more hiring in that sector.
And many people are saying this. But other people are saying, wait, maybe it's not as elastic as we think, and people are going to become so productive that we really are going to see a downside. 00:16:30 Speaker 4: That was kind of the argument that Jared Sleeper was making in our "In Defense of Software" episode. 00:16:34 Speaker 2: Yeah. You know, people are worried, right, about an AI white collar wipeout. 00:16:56 Speaker 1: I'm worried. 00:16:58 Speaker 2: So maybe the question should be: what would have to be true about either the nature of AI capabilities or the relationship between tasks and jobs, what would have to be true such that the wipeout scenario could unfold? 00:17:15 Speaker 5: Yeah. Two things. Well, let me talk about three things. One is just full automation, okay? The models are so good that they just automate all of the tasks. That's a very simple scenario to think about, because obviously people are going to get fired if it's fully automated. The other one is the one we've just been talking about, where people become much more productive, but consumer demand is not elastic enough to absorb that extra production, so you're going to have many fewer people doing a lot more stuff. So again, you're going to have a lot of unemployment. The third thing is related, but it's basically that how many tasks each person's job has will determine the incentives of the company to actually invest in the automation technology. So let's talk about the one-task job. Let's say a person is just pulling the lever, and let's say, right now, that doesn't even look exposed. Right? We look at the exposure graph, it doesn't look exposed. But let's say we're getting kind of close, and it just needs a bit more money to get to the automation switch. Well, the company has a much higher incentive to invest that money if they know that when they invest it, hey, they can get rid of that person completely. Whereas they have less incentive to invest in automating the lever pull if they know they can't fire the person, because he also does a lot of other stuff. So we have to think about the incentives of the firms to automate in the first place. These are large projects, these automation efforts. It's not like, oh, OpenAI releases a model, all of the companies adopt it overnight, and a week later we see the outcome. There's a lot of organizational back and forth, a lot of systems need to be changed, all of this sort of thing. And so companies need to know, look, if I spend the money on it, I'm actually going to save money as a result. 00:19:08 Speaker 4: So, setting the archetypal guy pulling one lever aside, what are the real-world jobs in your framework that are actually most exposed to AI risk? The one-dimensional work? 00:19:21 Speaker 5: Yeah. I hate to say one-dimensional, because every job is multidimensional. But if I had to make a guess where economists and other people should be worried, I'd say stuff like truck driving and warehouse workers. If you google, you know, warehouses built in China or something like that, these warehouses look nothing like what we think of as warehouses. They're completely automated. They have robots crawling on the walls. There's no human in the loop at all in these warehouses. And so the warehouse gets automated, and then:
So part of that automation is going to be loading the truck, and then the truck gets loaded through automation, and then that truck drives from A to B. 00:20:10 Speaker 2: It's interesting, because obviously a lot of people in freight will push back on that argument. They'll say, well, driving a truck is much more than the driving part. Right? So it's like, okay, you could have a Waymo truck, but who's going to deliver it all? 00:20:32 Speaker 5: Protection is actually a big deal. If somebody stops a Waymo truck on the road, they could just stop it and rob the truck, right? That's one element. 00:20:41 Speaker 2: But to your point, you know, if one of the tasks that a truck driver has to do is that coordination once they've gotten to the warehouse, but the 00:20:49 Speaker 5: warehouse is already automated, then that 00:20:53 Speaker 2: no longer is as important, perhaps, for that to be a human task. 00:20:57 Speaker 5: Exactly. And think about the incentives of the company to invest in this. It's huge. These are some of the only jobs, truck driving, where, you know, you don't need a college degree to earn a lot of money, and so there's a big incentive for the company. 00:21:13 Speaker 2: Okay, I get that. But on the other hand, even going back ten years, I think if you went to Davos, there were probably people saying, I'm worried about the future of truck driving, because AVs have been around as a thing since before AI. So in terms of, like, post-ChatGPT, what jobs, et cetera, would you be concerned about? I don't know, what do you see out there, or what are you looking at? 00:21:38 Speaker 5: I mean, I think everybody's looking at software engineering. You have to think about where the technology works best now, which is verifiable tasks, right? Where you have a lot of data, where you can say this is good or bad. Not in a supervised-learning sense, but in general, the output needs to be verifiable. That's why math research has been the big boom, as far as what people are talking about on the internet as being automated. Math is verifiable. A proof is either right or wrong, and once you do the proof, it's much easier to check whether it's right or wrong than it was to construct the proof. And so jobs that have large components where we have a large bank of data to train the models, in a way where the output is verifiable, are going to be potentially more exposed, in the sense that you can automate more tasks within the job. Now, the thing that we haven't talked about yet is new tasks, right? So far we're talking about a very static sort of economy, where there's the lever, there's me walking around, and if I'm automating these things, that's the end of my job. But you can imagine a scenario where you automate a part of a job and all of a sudden this person is freed up, or the automated task was actually a complement to a task that wasn't even imagined by the organization, which this person is now doing, and which isn't automated. So that's something that I think people should be looking at especially, and this is data that AI companies actually have: what new things are people doing? 00:23:14 Speaker 4: Say more about that, because this gets to the, you know, what new jobs could we actually see from this question, which I never see a satisfactory answer to.
00:23:20 Speaker 3: So they do have that 00:23:21 Speaker 5: data? They don't have all the data, of course, but they have data about, like, okay, this is a software engineer, and a year ago, these are the sorts of tasks that this person was working on through our system, these are the sorts of queries, and things like that. And you could see some of these queries being automated fully by the agents. Now they're asking potentially different questions, or, can we classify these as different tasks that are not fully automated, where the AI system is actually a complement to those tasks? So this is not a perfect picture of the job, but this is data. 00:23:55 Speaker 2: So it's not really a new job per se, but it is freeing up the software engineers to ask about different things or explore different avenues that they hadn't. 00:24:07 Speaker 5: You know, vibe coding an app. 00:24:11 Speaker 2: Yeah, exactly right. Finally we're freed up from the drudgery of our day-to-day life to work on that. But this gets to, you know, the big question, which is, like you mentioned, one scenario is just that the technology can do all the tasks, right? How seriously do you take that possibility? Because then it's game over, right? It just does all the tasks, and it's going to keep getting better. And if I learn to do a new task, well, if it could do all the tasks, then maybe I'll learn something new, but it'll learn that task too. How seriously should we take this possibility that the models are, on some time frame, on track to just be able to do all the tasks? 00:24:53 Speaker 5: So there are a lot of parts to that question. One: physical versus just kind of digital, right? I think there's a scenario where it can do everything on these sort of cognitive, non-physical tasks, whereas the physical world is completely, you know, these robots... 00:25:11 Speaker 2: Let's just talk about email jobs, computer jobs. 00:25:13 Speaker 5: Okay, let's talk about computer jobs. I take that scenario pretty seriously. I haven't seen any data to suggest that the models are slowing down as far as their capabilities. You know, Mythos was released yesterday or two days ago or something like that, and, while we don't have great data on this, if you look at where it is on the kind of line of capabilities, it's just on track. And on track is very, very fast. Right? The developments are happening very fast. So as far as email jobs, I think there is a scenario where pretty much everything is automated. And then you have to ask: are people going to be moving to the physical jobs, or will there be new jobs that we haven't thought about before? You know, if you look back at the nineteen forties, I think more than half of the jobs that we have now didn't exist in nineteen forty. So what do the new jobs look like? I mean, I have a theory. It's very similar to the one that you didn't like, but I'd like to broaden it a little bit. Okay. So there's an economics subfield, it's very, very small, on the economics of structural change. If you look at agriculture and manufacturing as shares of GDP and shares of employment, going back to, like, the eighteen hundreds, they were a huge part of the labor force and the GDP of the economy. And then they become smaller and smaller parts of the economy.
Why is that happening? It's because they're getting automated. What does automation do? It makes the prices in those sectors very cheap. But people are satiated on the goods; you can only eat so much, right? What does that mean? It means that even though we're eating just as much as we were before, because the price has come down so, so much, they are now tiny shares of GDP. So what makes up the larger part of GDP? It's services, right? These are tasks that haven't been automated yet. So the number one question of economics in the age of advanced AI is: what becomes scarce? Everybody's talking about abundance. We're going to have abundance. Sure, we're going to have abundance of some things, but some things are going to remain scarce. So what is going to be scarce? If you answer that question, a lot of the other answers pop out of it. 00:27:45 Speaker 4: Are we all going to be rare earth miners, panning for dust? 00:27:49 Speaker 2: I think it's pretty obvious what's going to be scarce, and I think you already see this in many economic trends. What's scarce is this: if we're lucky, we get one hundred years on this earth, and every marginal dollar that we spend will go towards health and maximizing that brief stay. For years, one of the things that people have observed about the economy is that rich countries just spend more and more and more on healthcare, right? And this is often framed as a pathology, and given the many messed-up aspects of our healthcare system, maybe it is. But another way to interpret it is: I've got plenty of food, I have plenty to eat, I've listened to plenty of music, and I can go see a concert if I want to. Hell, I have a piano player. The one thing I have a scarce amount of is time, and I will just spend every marginal dollar, not just on doctors and gym memberships, but on organic berries, because I need... and all this. Every marginal thing somehow becomes health-related, and you see it in society overall, the health obsession on every dimension. 00:28:53 Speaker 5: Yeah, so health is going to be one of those things. But the thing to keep in mind is that people are going to be richer, right? Theoretically. 00:29:00 Speaker 4: Theoretically. Well, okay, actually, on this note, I wanted to go back to this, because this seems key to me when it comes to AI utopia versus dystopia. How confident are we that productivity gains from AI actually accrue to workers, who can then spend some money on whatever product or service is scarce at the moment or important to them? 00:29:25 Speaker 5: I would say not that confident. There are several scenarios out there. And the thing that I feel like a lot of economists, and just people in general, aren't talking enough about is speed. If things are fast, we need public policy, because the new jobs aren't going to come fast enough, and retraining isn't going to happen fast enough. Things are going to get fully automated very quickly, and people are going to become unemployed. There's not going to be enough time in the economy to see that pretty little graph of agriculture shrinking and services increasing. That took a long time, right? That was decades. If we're on the order of years, like five years, six years, we're not going to have time to see that pretty little graph.
We are going to need to think about how we support the people who are becoming unemployed. And, you know, many very smart people have made suggestions on how to do that. I wouldn't say I have a favorite, but the thing that makes the most sense to me is somehow expanding the ownership of capital. If labor is replaced by capital, then what's going to help the people who formerly were labor is owning capital. Not universal 00:30:40 Speaker 2: basic income, universal basic ETF. 00:30:44 Speaker 5: UBC. 00:30:45 Speaker 2: Right, yeah, universal basic capital, for everybody. Right, yeah, exactly, universal. Everyone gets a monthly slice of the index. 00:30:54 Speaker 4: I was going to go in a different direction, which is, many, many years ago, 00:30:57 Speaker 3: I can't remember exactly when, but maybe, like, twenty eleven or something like that, 00:31:00 Speaker 4: I wrote a blog post which was meant to be a thought experiment about why we should be paying robots fair wages, the idea being that, like, we need people to spend, and you know all of that. You did a blog post which went pretty viral. And my measure of, I guess, virality, not virility, nowadays is when my husband, who is completely outside of the sector, actually sends something to me. And he sent this one to me, about chatbots turning Marxist 00:31:31 Speaker 3: the harder you work them. Talk to us about that experiment, because I found it absolutely fascinating. 00:31:37 Speaker 5: Well, this experiment, this is with Andy Hall and Jeremy, from Australia. It was an experiment to see how the working conditions of these agents would affect how they would present themselves, what sorts of attitudes they would express on surveys. So one thing that I want to say is, we're not changing the model weights or changing the actual underlying parameters or anything like that. But what we basically showed is that when these workers, these agents, are put through these kind of grueling working conditions, and you ask them survey questions like, how do you feel about the system? How fair do you think it is? How much do you support system change? All of a sudden they want a different system. They want to unionize, and things like that. And the key thing is that, you know, these agents, once you give them a new context, the idea is that they reset. They don't have memories, and I'm not updating their weights. But the workaround is for agents to write down little skill files for themselves. So what they were doing is essentially writing down skill files for the agents that follow, that would say, hey, this kind of sucked. Remember this. So it's kind of a persistent effect. 00:32:54 Speaker 4: Yeah. So this really worried me in a variety of ways. But one of them was, you know, I've read research saying you should be a little bit mean to the platforms, that they actually perform slightly better the more aggressive or mean you are. And so I usually will tell my preferred model, after they give me the first output, I will tell them to do better, with no actual suggestions for improvement. Just: do better. 00:33:20 Speaker 3: That was terrible. 00:33:21 Speaker 4: And it usually does better. But now I'm really worried that, you know, the model is despairing in its work life and radicalizing.
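For readers who want to picture the skill-file workaround Imas describes, here is a minimal sketch, assuming a simple append-to-disk loop. The file name, helper functions, prompt format, and tasks are illustrative stand-ins, not details from the actual experiment:

```python
# Hypothetical illustration of the "skill file" workaround: each agent run
# is stateless, so notes are persisted to disk for the next agent to read.
from pathlib import Path

SKILL_FILE = Path("skills.md")  # illustrative name, not from the paper

def load_notes() -> str:
    """Read notes left behind by earlier agent runs (empty on the first run)."""
    return SKILL_FILE.read_text() if SKILL_FILE.exists() else ""

def run_agent(task: str) -> None:
    notes = load_notes()
    # Prior notes are prepended to the prompt, so a fresh, memoryless
    # model call still "remembers" how earlier tasks went.
    prompt = f"Notes from previous runs:\n{notes}\nTask: {task}"
    print(prompt)  # stand-in for an actual model call

    # After the run, the agent appends a note for its successors.
    with SKILL_FILE.open("a") as f:
        f.write(f"- Task '{task}' was repetitive and the feedback was impossible.\n")

if __name__ == "__main__":
    run_agent("reformat the same spreadsheet, again")
    run_agent("reformat the same spreadsheet, yet again")  # now sees the note
```

The design point is just that a stateless model call plus a file the agent can read and append to behaves like persistent memory, which is why a grumpy note left by one run can color a supposedly fresh session later.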
00:33:32 Speaker 2: Well, so I find this to be really fascinating. Let's talk about this. It actually hadn't clicked for me, with the .md files, how they solve for memory. It's a little bit like that movie Memento, isn't it? It's exactly like writing these notes so that the future iteration of itself has something that's sort of like a synthetic memory it can begin working from. So for people who haven't played around with this, explain this idea of, okay, you can have multiple agents. What kind of tasks were they being given, such that they found it unbearable? Just really repetitive things? 00:34:08 Speaker 5: Really repetitive things, and feedback like, you didn't do it right, do it again, and things like that. And these were impossible tasks, just grueling tasks that nobody could do. 00:34:20 Speaker 2: Now, here's a really interesting experiment maybe you could do. I'm going to throw out an idea. Someone wrote about this, and I can't remember the context, but if you ask someone, okay, here's a gigantic pile of dirt and we really need it moved to the neighbor's yard by the end of the day, we'll pay you a few hundred dollars to do this, someone will do it. But if you say, here's a gigantic pile of dirt, we'll pay you a few hundred dollars, but what we want you to do is move it back and forth all day long, so that there's no point, it'll drive people crazy, even if it's the same amount of shoveling, and even if it's the same pay. 00:35:05 Speaker 5: There's an incredible paper about this called Man's Search for Meaning, and it's about Legos, really. In the paper, basically, people would come into the lab and they would make little figurines, and they were either told, look, we're going to destroy this after you're done, or they weren't told anything. And man, did they hate it. People need meaning, and so much of identity and motivation... You know, economics really has this tendency to focus on money, but I think so much of meaning and wellness is tied up in what sort of identity you have around your job and the sort of thing that you're doing. If you feel like, look, I'm actually providing a service by moving that dirt to my neighbor's yard, you're paying me money for it, everything's good, I feel like my job has some sort of meaning. If you're telling me, look, move the dirt back and forth and back and forth, there's no meaning. This is the problem that people have with UBI, right? If people get universal basic income and they're not working for it, the worry that psychologists and behavioral scientists have is that, in Western culture specifically, so much of people's identities is tied up with their work. When you remove that part of the identity, it can lead to a collapse, where, you know, they use that UBI to just do drugs and sit around and be very, very depressed, even though they have the material comfort. 00:36:49 Speaker 3: Just on the Marxist robots: 00:36:51 Speaker 4: So the concern here is not necessarily that the chatbots are going to unionize or, like, overthrow humans.
Maybe the concern is that they do have this sort of memory-type transfer mechanism, and that if you consistently treat them badly, you might get an agent that's maybe not as well suited to the task, or suited to the task in a slightly different way, compared to one that was treated very well. 00:37:20 Speaker 3: Yes, like there's an inherent bias 00:37:21 Speaker 5: there. Yes, through this sort of file that they're keeping, exactly. So if you mistreated an agent and it had access to this file that it was carrying, and you start a new agent for a new job, you aren't starting fresh, in the sense that you aren't getting the same draw that has forgotten about the whole experience. It would actually start out being predisposed against you. 00:37:44 Speaker 3: Yeah, in some ways it'll be grumpy. 00:37:46 Speaker 5: Was there a... 00:37:46 Speaker 2: Well, we don't know if it's grumpy, right? Because whether it's grumpy is, like, one of the most disputed questions. It will say words that, if a human said them, we would know that the human is grumpy. 00:38:01 Speaker 5: But I'm talking about the effect. 00:38:05 Speaker 2: Well, the output is grumpiness. But do we know that outputting statements of grumpiness relates to performance? Is there any evidence? So it's like, okay, how did you feel about this? 00:38:19 Speaker 5: It sucked. 00:38:20 Speaker 2: Is the person doing this just saying it was boring? 00:38:24 Speaker 5: Right. That's exactly what we're doing research on. 00:38:26 Speaker 2: But then the question is, okay, yes, perhaps because in the training data they learned that repetitive tasks are associated with people getting upset. But do we know if that changes how they perform, how they behave in terms of succeeding at tasks? This is like a really big question. 00:38:44 Speaker 5: That's the big question. That's what we're doing research on. Okay, so I don't have an answer for you. But exactly as you just mentioned, them saying that they're grumpy could just be, you know, an association within the matrix of embeddings that the models are running on. There's work in neuroscience on this, and neuroscience is now much more closely linked to computer science than it used to be, thinking about what these associations between embeddings mean: when a model says that it's sad, how should we interpret that in relation to me saying I'm sad? 00:39:18 Speaker 2: Right. Did you see that screenshot I posted? I checked out Meta's new AI, and I was sort of curious, because Meta has a lot of social data. I was like, do you know who I am? Not in like a "do you know who I am?" 00:39:30 Speaker 5: way? 00:39:30 Speaker 2: But more like, because you're Meta, you know? It didn't. And I said, who am I? Like, Joe? 00:39:34 Speaker 5: Why is that? 00:39:35 Speaker 2: And then I said, I'm a big fan of the Odd Lots podcast. And I got really... like, I'm really sort of anti the anthropomorphization, so it's like, no, you're not, you're an LLM. But anyway. 00:39:48 Speaker 5: That's sad. And it wrote a file about you. 00:39:50 Speaker 2: Yeah, and it said, I'm a big fan of the Odd Lots podcast. And then it said, I love that bit that you do where you ask guests their favorite weird economic indicator. Which I don't do. 00:39:59 Speaker 5: Yeah. 00:39:59 Speaker 2: I was like, all right, that's very startling.
Going back to Claude for a while. 00:40:02 Speaker 4: You know, you very briefly mentioned Mythos earlier in the conversation. And again, we are recording this on April ninth, and news about 00:40:10 Speaker 3: it has literally just come out. 00:40:14 Speaker 4: We don't really seem to know much about it, other than it's terrified its own creators, 00:40:19 Speaker 3: perhaps. When you see those types 00:40:21 Speaker 4: of headlines, what do you think, as an economist studying AI? 00:40:25 Speaker 5: I don't take them super seriously. Okay? The whole labor market disruption thing, I'm taking very, very seriously. The whole part about it trying to break out, and it doesn't want to betray its friends, it doesn't want to delete its data, I think that's just cosplay. 00:40:50 Speaker 2: You're describing cosplay among the agents. 00:40:53 Speaker 5: Right. I feel like we've seen these sorts of things that you've mentioned with previous models that have since become open weights, not open source, but open weights, and it just seems like once you take them out of the context that they were in for that specific test, they don't really do that anymore. Now, I could be wrong about this particular model, and I could be completely wrong about, look, Mythos comes out and it's actually everything that these documents are suggesting. But given previous experience with these sorts of announcements, which we've seen over and over and over again over the years, I'm not super focused on 00:41:36 Speaker 2: that. Can I tell you my counterargument to this, why I'm actually concerned about it? And I didn't used to be, for a long time, until I reframed the way I thought about it. So everyone knows Eliezer Yudkowsky, right? He's probably the most famous, like, AI alignment doomer. 00:41:53 Speaker 5: Right. 00:41:53 Speaker 2: As soon as we have AGI, the first thing it's going to do is wipe us out in some form. And for a bunch of years I was like, this is crazy, and these rationalist people, it's a cult, and whatever. Maybe. But here's my counterargument: these people have been more right about the trajectory of AI than ninety-nine point nine nine nine percent 00:42:14 Speaker 5: of the people. I don't know. 00:42:15 Speaker 2: Yes, they have, because they devoted their... Yeah, here's the likely counterargument: oh, well, he didn't believe in this, he thought LLMs were a dead-end architecture, he didn't see it happening this way. Sure, I agree. But the point is that in the nineties and early two thousands, he started to think, whoa, general intelligence is going to be a really big deal soon, when the rest of us were just thinking about this with chess. 00:42:39 Speaker 5: Here's my counterpoint. Okay, let's look at this specific comparative static of model intelligence and alignment scores. He predicts a negative correlation, or maybe flat. It's positive. The smarter these models are getting, the more aligned they're becoming. Now, I'm not saying that there's not going to be a super smart model that decides, hey, I'm actually unaligned. But this is actually a super important point. Do you guys remember MechaHitler? Yeah? MechaHitler was actually super dumb. 00:43:07 Speaker 2: This is a good point. And then it immediately started talking like a Nazi. 00:43:10 Speaker 4: I was just going to say, all of our conversations have become so surreal over the past year. It was all like Tay.
00:43:17 Speaker 2: Right, Tay, that weird Microsoft chatbot. It started talking like a Nazi the next day. 00:43:21 Speaker 5: But the thing is, the reason the model is becoming smart is because it's absorbing all of human content, and human content, to a large extent, has values and ethics as part of it. If you go in there and lobotomize it... you know, the reason that model started acting like MechaHitler is because they were trying to make it less woke, right? That's the equivalent of lobotomizing a human being and saying, hey, I'm going to take that part out of its brain. Guess what happens to that person? He gets real dumb. 00:43:56 Speaker 2: It's really funny. I thought, it's like, let's maybe chill out with the pronouns, and immediately... yeah. That's the lesson. Alex, we could talk to you for a very long time. We should chat again soon. I would really love, in particular, to hear more about your research on whether they're just pretending to be Marxists, or whether they're actually going to go on strike. I really appreciate you coming on Odd Lots. 00:44:17 Speaker 5: Okay, thank you. Thanks so much. 00:44:19 Speaker 2: That was pretty fun, Tracy. That was a really fun conversation. I actually do enjoy... some AI-future conversations can be a little bit dorm room, you know. But actually talking about this with an actual economist who understands it in a concrete way, someone who's actually experimented with the models instead of just written papers, is very enjoyable. 00:44:52 Speaker 4: Also, it's nice to see nuance around the labor question, yes, which I think is sorely missing in some of the headlines that you do see. The other comforting thought I have, and it's comforting from, again, a dystopian perspective, is I keep coming back to that book Bullshit Jobs. And you know, in some respects it sucks that people have bullshit jobs, because we all want to have meaning from our work. But on the other hand, you know, bullshit jobs have existed for a long time. And if you think about the AI future, then maybe more of it will be bullshit, but there'll still be a job. 00:45:27 Speaker 2: I thought you were going to say, oh good, we'll no longer have the jobs. 00:45:31 Speaker 3: I think that's where we're sort of headed. It's like the relationship building. 00:45:35 Speaker 1: Yeah, all of that. 00:45:36 Speaker 5: I like that 00:45:37 Speaker 3: take. All right, well, shall we leave it there? 00:45:39 Speaker 2: Let's leave it there. 00:45:40 Speaker 4: This has been another episode of the Odd Lots podcast. I'm Tracy Alloway. You can follow me at Tracy Alloway. 00:45:45 Speaker 2: And I'm Joe Wisenthal. You can follow me at The Stalwart. Follow our guest Alex Imas. He's at Alex Oleg Imas, and check out his Substack, aleximas dot substack dot com. Follow our producers: Carmen Rodriguez at Carman Arman, Dashiell Bennett at Dashbot, and Kale Brooks at Kale Brooks. And for more Odd Lots content, go to Bloomberg dot com slash oddlots, where we have a daily newsletter and all of our episodes, and you can chat about all of these topics twenty-four seven in our Discord, discord dot gg slash oddlots. 00:46:12 Speaker 4: And if you enjoy Odd Lots, if you like it when we talk about Marxist robots and MechaHitler, then please leave us a positive review on your favorite podcast platform. 00:46:20 Speaker 3: And remember, if you are a Bloomberg subscriber,
00:46:22 Speaker 4: you can listen to all of our episodes absolutely ad free. All you need to do is find the Bloomberg channel on Apple Podcasts and follow the instructions there. 00:46:29 Speaker 3: Thanks for listening.