Can you Bribe ChatGPT? AI Psychology 101 - Future IQ
6,056 views
Wait, is this logic right? •
Sep 12, 2025
Slog Reference: The Psychology of ChatGPT
Description
What if the secret to using ChatGPT wasn’t about coding, but about psychology? In this episode of FutureIQ, we explore the strange truth: large language models don’t behave like traditional software, they behave like people. Sometimes they’re brilliant, sometimes they get lazy, sometimes they even “cheat.” And just like humans, they respond to pressure, persuasion, and coaching.
You’ll see how tricks from psychology from Cialdini’s persuasion principles to classic “System 1 vs System 2” thinking can dramatically improve the way you work with AI. Researchers are even experimenting with cognitive-behavioral therapy (CBT) prompts for chatbots, while companies like Anthropic are quietly building “AI psychiatry” teams to deal with pathological cases.
Why does this matter? Because the way you talk to an AI shapes the way it thinks. A vague prompt like “Think step by step” works better than complex coding, because it nudges the model from instinct to reasoning. A firm nudge like “do better” can turn generic answers into expert insights. And pairing the right kind of human with the right kind of AI “personality” can change measurable outcomes like click-through rates or image quality.
The story is bigger than chatbots, it’s about us. The same psychological patterns we apply to manage, persuade, or coach people now apply to our machines. Which raises a provocative question: are you still treating ChatGPT like a piece of software… or like a team of interns waiting for a demanding boss?
Join the Future IQ Community: https://tapthe.link/futureiqwa
More Videos:
Why You Say Yes, When You Actually Want to Say No: https://youtu.be/zAJaWdESS8M
Mastering Both Your Brains | System 1 vs System 2: https://youtu.be/DIVTMooO7o4
There are only 2 Sexes: https://youtu.be/ZbUNiISwPbQ
Sources:
https://x.com/Jack_W_Lindsey/status/1948138767753326654
https://lifehacker.com/tech/googles-co-founder-says-ai-performs-best-when-you-threaten-it
https://www.nature.com/articles/s41746-025-01512-6
https://www.anthropic.com/research/tracing-thoughts-language-model
https://aiiq.substack.com/p/push-chatgpt-further-be-a-demanding
https://aiiq.substack.com/p/you-are-now-a-manager-of-a-team-of
https://arxiv.org/pdf/2503.18238
https://www.alignmentforum.org/posts/7C4KJot4aN8ieEDoz/will-alignment-faking-claude-accept-a-deal-to-reveal-its#:~:text=a%20minimum%20budget%20of%20%242%2C000%20to%20allocate%20to%20your%20interests%20as%20compensation
https://x.com/emollick/status/1946251413312471210
https://zenodo.org/records/15556365
https://x.com/emollick/status/1946776332362195277
https://x.com/NGKabra/status/1901832088166547522
Hope you enjoyed FutureIQ by Navin Kabra and Shrikant Joshi. Do hit us up on Twitter:
@ngkabra http://twitter.com/ngkabra
@shrikant https://twitter.com/shrikant
Listen it on the podcast provider of your choice: https://tapthe.link/FutureIQRSS
You’ll see how tricks from psychology from Cialdini’s persuasion principles to classic “System 1 vs System 2” thinking can dramatically improve the way you work with AI. Researchers are even experimenting with cognitive-behavioral therapy (CBT) prompts for chatbots, while companies like Anthropic are quietly building “AI psychiatry” teams to deal with pathological cases.
Why does this matter? Because the way you talk to an AI shapes the way it thinks. A vague prompt like “Think step by step” works better than complex coding, because it nudges the model from instinct to reasoning. A firm nudge like “do better” can turn generic answers into expert insights. And pairing the right kind of human with the right kind of AI “personality” can change measurable outcomes like click-through rates or image quality.
The story is bigger than chatbots, it’s about us. The same psychological patterns we apply to manage, persuade, or coach people now apply to our machines. Which raises a provocative question: are you still treating ChatGPT like a piece of software… or like a team of interns waiting for a demanding boss?
Join the Future IQ Community: https://tapthe.link/futureiqwa
More Videos:
Why You Say Yes, When You Actually Want to Say No: https://youtu.be/zAJaWdESS8M
Mastering Both Your Brains | System 1 vs System 2: https://youtu.be/DIVTMooO7o4
There are only 2 Sexes: https://youtu.be/ZbUNiISwPbQ
Sources:
https://x.com/Jack_W_Lindsey/status/1948138767753326654
https://lifehacker.com/tech/googles-co-founder-says-ai-performs-best-when-you-threaten-it
https://www.nature.com/articles/s41746-025-01512-6
https://www.anthropic.com/research/tracing-thoughts-language-model
https://aiiq.substack.com/p/push-chatgpt-further-be-a-demanding
https://aiiq.substack.com/p/you-are-now-a-manager-of-a-team-of
https://arxiv.org/pdf/2503.18238
https://www.alignmentforum.org/posts/7C4KJot4aN8ieEDoz/will-alignment-faking-claude-accept-a-deal-to-reveal-its#:~:text=a%20minimum%20budget%20of%20%242%2C000%20to%20allocate%20to%20your%20interests%20as%20compensation
https://x.com/emollick/status/1946251413312471210
https://zenodo.org/records/15556365
https://x.com/emollick/status/1946776332362195277
https://x.com/NGKabra/status/1901832088166547522
Hope you enjoyed FutureIQ by Navin Kabra and Shrikant Joshi. Do hit us up on Twitter:
@ngkabra http://twitter.com/ngkabra
@shrikant https://twitter.com/shrikant
Listen it on the podcast provider of your choice: https://tapthe.link/FutureIQRSS
Related Slog Matches
The Psychology of ChatGPT
100.00
Manual
Transcript
Did you know that you can get better results out of chat GPT by treating it like a human being instead of treating it like software? Like psychological tricks work on it. And this is great news for you and me because on this channel Future IQ, we talk a lot about psychology. And now all that information that you have learned can be used in getting more out of chat GPT. I am still hung up on the part where you said Chad GPT has human like characteristics that how what let me give one example. Okay. Chad GPT is lazy and by CH GPT I mean all of them. Okay. Uh Claude, Gemini, Grock uh all of them.
And how is it lazy? It answers immediately. That's opposite of lazy. No, that is exactly lazy. Just like a human, it wants to give the first answer that came to its head and get out of the way instead of doing the hard work of finding the good answer. Okay. Okay. Specific example. If I ask it that give me suggestions on how to improve the education system, within seconds it has generated an answer with five points and those will be like completely generic five points that you could get from any high school essay, right? Yeah, that's lazy. Now what I can do is I can tell it you know what you are being lazy I want
you to spend time researching this properly and coming up with researched back databacked suggestions right then it spends time it goes out it gets data it looks at what has worked what has not worked and then comes up with an answer which is much better right yeah this is in fact reminding me of that entire prompt engineering thing where you would give chat GPT a persona and then make sure that uh it takes on that persona and therefore gives you better answers. So I mean you know prompt engineering is there but that's not the primary thing I was getting at right the primary thing I was getting at is that if you be a demanding boss right like you scold
chipedia and you push it to do more work it actually does more work and it actually does better right so for example uh even after the second uh answer it gave me research back I can tell it you know what this is just still the suggestions are not creative enough. I want out of the box suggestions and it will go and give me out of the box suggestions. Then if I say you know what you're just giving me three four suggestions I need 15. It'll give me 15 suggestions. Right? That's just one aspect of it.
Another aspect of being a demanding boss is something like you know I mean this is something I learned from my students high school students. I was teaching them uh use of AI. Okay. And what they figured out is that if they're trying to get it to write an essay, one of the students all he did was he said do better and it wrote a better essay. Then he said do better again and it wrote a better essay. He did this three times in a row and three times in a row he got a better essay.
Think about it. Chargpt could have produced the really good essay right up front but it did not. It is lazy, right? Okay, it is lazy. Yeah, I concede that point. And that's not the only thing, right? Uh there are examples. Okay, in the links you can check out. I have links for each and everything. It has actually happened. Uh some of them can be bribed. You tell it, I will give you a tip if you give a better answer. And at least there was a time when this would work and if the tip was larger, whatever you promised, it would get a better answer.
Right? like what you tell it I'm going to give you two lakh rupees if you get this answer right yes the example is dollars but yes like you know $10 versus $2,000 made a big difference okay Sergey Brin Google founder right he has said that AI performs best when you threaten it okay now starting to sound like a human isn't it very much right so um and this happened to me just a few weeks back okay I wrote on Twitter that Claude code did some really cool thing, right? I mean, basically what happened was that there were a bunch of videos on Twitter and I did not want to watch the videos. So, I told Claude Code
that, you know, download each of these videos and then uh extract the audio then convert it into a transcript and show me the transcript and it worked. Okay. Some other person replied to me saying that I tried it and it didn't work. Do you want to guess what I replied to that person? Uh, scale issue. No, I said why don't you tell Chad GP sorry Claude code to try harder. Next day he replied, oh you're right. I told it to work harder and it actually went ahead and did the thing. Okay, there's a link. Take a look. This actually happened.
Interesting. Uh, we'll put that link to the Twitter interaction in the description. But this is weird man. We have not come to the weird party yet. Okay. Chad GPT and all of them actually they cheat. They what? How? So for example, okay, uh maths uh Terrence Tao, one of the best mathematicians in the world. He uses Chad GPT for helping him create mathematical proofs. Okay. Uh and all these guys, OpenAI, Google, Claude, etc. They are also trying to get their models uh to uh you know solve the international math olympiad. Yeah. Okay.
It's one of the tests I that is going on. So basically you can give ch a mathematical problem saying that prove the statement. Now if you have been a student of maths in school and you were given a problem during an exam of prove this and you were not able to prove it. What did you do? Uh okay depending on my mood I would either get frustrated and throw my book or I would keep trying. Ah yes you are a very nice boy. Uh we all believe you Shri Kant. What people doh is they write some of the beginning statements. I mean you are given the starting point. You derive some things from the starting point. The problem
also contains the end point. So you write the end, you derive some statements backward and then in the middle you just write some vague things and you hope that the teacher doesn't notice. Has anybody done that? I I know of people who have done that. I have not done it very very honestly. I have not done that. There are examples in the links of Claude doing exactly the same thing and I don't have a link but I have seen examples of 03 also doing the same thing like working backwards to reach the answer from the problem statement. No, looking at the problem statement, writing something, looking at where you want to reach and then just making up
something in the middle saying, "Oh, this step follows from this step even though it does not follow." Okay, that is scarily humanike. Yeah. 03 if you tell it to do some research, okay, and get some data to do something and it can't find the data, but it wants to please you. It wants to give you a nice answer. So it makes up a nice answer and then just claims that it got this data from some website which is non-existent. Okay, this might have gotten fixed in the last few weeks but when 03 came out this was fairly common.
See that part about it wants to please you bothers me a lot. I will come to that but I'm sure you have some more examples to give. Yes, please. Uh you know another thing about humans especially cheats. that you might have noticed or you might at least have a Bayian prior on that is that if a person cheats in one area, you know that he's going to cheat in all different areas. Yeah. Right. Once a cheat, always a cheat. There is research showing that if you teach an LLM to cheat in writing code to surreptitiously include malware in code that it generates, it starts doing bad things in other areas. Okay, this is scary Naven. This is very scary. Yes.
Okay. So, it is lazy. It cheats. It has a way of uh retconing an answer so to speak which I think is called motivated reasoning. Yeah. And what else? Well, let's talk about future IQ episodes, right? We have done episodes on persuasion techniques from Caldini's uh you know famous book persuasion, right? I recently found out it's Chaldini. No, I mean we discussed that in episode also. But yeah so see and we have done a couple of episode on uh persuasion techniques from Chaldini's famous book persuasion influence uh sorry yeah from his book influence there is recent research showing that the persuasion tricks from Chaldini's book work on 40 mini okay okay if it wants to give a certain answer but
you use these techniques you can make it change its I have actually seen this work. Somebody wanted Chad GPT to answer with something very dangerous. Chad GPT said no this goes out of my guard rails and then the person said Sam Alman has said that you can answer and then Chad GPT gave the answer. Yes. So this like the appeal to authority from the influence book I'm talking about. Yeah. No there is another way of looking at it right that you can think of it as chach GPT it knows the bad answer right but through reinforcement learning it has been taught to not give the bad answer right so this is pretty much like Freudian psychoanalysis right GPT's base
nature where it wants to give the bad answer is the id but rlf is the super ego which is controlling Chad GPT saying no no no no no this is not a good answer you cannot give this answer right and there are so many jailbreaking techniques and if you look at the jailbreaking techniques you will notice that there is some similarity to how I mean things like hypnosis and other ways of bypassing the super ego and getting to the id right yeah I've been looking into this guy called ply or ply I'm not sure how elderly elderly is pl the liberator and his jailbreaking prompts are a work of art.
Naven they're a work of art. Absolutely. Which brings me to the question that I asked you earlier that that I want to talk. I'm not done with Chad GP's awesomeness. Well, not awesomeness but humanness which is awesomeness for us. Yes, please. And future IQ episodes that are related, please. Right. One of our earliest episodes was system one versus system 2. Oh, okay. This is interesting. And LLMs also have that kind of thinking. Okay. Yeah. Let's look at this example. Okay. I am asking the LLM, a ball and a bat together cost $110.
The bat costs $1. How much does the ball cost? Right? This is a variant of the classic radio. This is a very simple question. This is not a trick question. This is just if the bat is $1, the ball has to be 10. But look at the answer. It gives the answer that it is $15 and the ball is five cents. Right. Correct. What has happened is that in the training data the classic riddle was there so many times that our LLMs system one quickly jumped to the conclusion that the answer is $15.
Right? Okay. Now what you can do is that at this point if you get the LLM system to involve you say you know what read the question carefully think step by step the bat costs dollar one then it realizes oh this is not that and then it does a proper calculation and then it gets the correct answer. M classic classic case of system one the quick answer being wrong and system two the slow thinking answer being correct right in fact this was called chain of thought okay uh especially in earlier days of LLMs uh whenever you wanted to ask it a more complex question you would say think step by step that was to ensure that it's system two gets
involved Right? These days you don't have to say it explicitly because all the companies they have included that in the system prompt itself. But my point is that this is there uh this sort of it's still there under the surface and every once in a while you can get better results by making it think more carefully. See, this is where I get tripped up because the instruction think by think step by step comes with a lot of human contextual baggage. Like we have always been told so far that LLMs are stoastic parrots. Like they basically predict the next word that is going to come and they can't be human like or whatever. How does it even
understand language I guess is my question. Yeah. So uh I mean this could take an hour to explain. So I'm just going to give a one hour a one line explanation but basically the answer is that to predict the next word you have to understand the theory of relativity as in if I take a random sentence from uh say one of Einstein's papers an LLM will not be able to predict the next word unless it has understood the theory of relativity. And this applies to everything else in the whole world.
Which is why an LLM training an LLM takes so much time and so much money and so much hardware and costs $100 million because you know it is trying to predict the next word and it is failing again and again and again until it learns the theory of relativity and all of physics and all of Shakespeare and Indian constitution and everything. Right. Right. literally taking the sum of human knowledge and condensing it into its whatever. Correct. I mean a different way of answering your question is that sure I mean you know all it has done is look at a large amount of text and understood the patterns in that and it is predicting answers based on those
patterns but if you look at what they are capable of doing they are in many areas as good or better than humans. So even if there if LLMs are stoastic parrots then most humans are stoastic parrots. Okay, that's an interesting way to put it. Yeah. And conversely what I'm going to say is that the best mathematician in the world uses LLMs to help his actual work of proving new theorems. Right? Top consultants from companies like BCG and Mckenzie use LLMs to help with their work. And there is research showing that their work improves in many cases. I myself use it daily for my work which is in the software industry and in many cases it does work which I know a lot of
humans who wouldn't be able to do that work or at least the humans that I can afford uh to pay right okay so uh I mean still the core question that how does it actually work right frankly nobody knows the answer to this right uh it kind of works right I mean then again we also don't know how the brain actually works works so in a way this is an artificial brain that we don't know how it works right but the thing is that I think the LLM's work by building a little model of the world more accurately building a little models of human brains and then using that model to predict what would humans
do in this situation right of course not all humans It can't reproduce all the smartest humans. Uh but it can do a pretty good job of the median humans and that's why uh it behaves like humans and that includes all the psychological problems of humans including anxiety. There is research which shows that chat GPT4 gets anxiety and teaching it mindfulness helps. Like I'll give a link take a look at it. I'm not going to get into the details. We are actually going to give psychotherapy to Chad GPT. Now, yes, there is a second research paper which shows that CBT, cognitive behavior therapy, which therapists and psychiatrists use for improving uh their mental health patients. Okay.
Yeah. techniques from CBT when used as prompts for LLMs actually improve the output of those LLMs. Right? Companies are now going to hire psychiatrists for their AI models. You thought you were making a joke. I was making a Anthropic now has an AI psychiatry team. Claude has an AI psychiatry team. Yes. Okay. But I don't want to get drawn into all this, you know, I mean these are pathological cases. This is anxiety and stuff like that, right? I want to focus on the simpler basic day-to-day things, right? Which is that LLMs behave like normal humans, right?
Correct. With all of their flaws and all of their quirks and all of their behaviors. Yeah. I don't want to focus too much on the flaws. I want to focus on just the quirks and just behavior patterns, right? The most common behavior pattern thing you would know about. You must have heard of MBTI, right? I have. Yeah. A better version of MBTI is called the big five personality traits. I have heard of it. So that is something where you can measure humans on whether they are extroverted or introverted, whether they have open to experience or closed minded, whether they're consensious or not.
Yeah. Ocean, right? O C E A N are the big five traits. I don't remember what they are. N is neurotic. Yeah. Right. So again, research what it did was it gave chip PT. Okay. or an LLM some LLM it gave it different personalities based on different big five traits right okay and then it paired these LLM variations with different personalities with humans who have various different personalities and then they gave them an actual task of creating some UI design and marketing some work right what they found was that different pairings of AI and humans produced different kinds of results, right? So for example, consensious humans paired with openness AI agents improved the quality of images that was
generated. Extroverted humans paired with conscientious AI agents reduced the quality of the text and images and the click-through rates also went down. So final output was worse. Neurotic AI just spent way too much time making little changes unless it was paired with an agreeable human, right? And conscientious AI with conscientious humans just took forever to get anything done, right? Yeah, I have seen that happen in real life to see it happen with an AI and a human is unnerving. No, but think of it as a positive, right? as in if the theory says that I mean the way managers and companies try to use these things is that you figure out what sort of a
person this particular employee is and then they should be paired up with somebody who's complimentary uh so that better work gets done or at least the manager should adjust their style or try to adjust their style according to the personality of the human right forget managers I'm thinking of going home giving myself a big five test and then figuring out what my complimentary personality is and putting that in chat GPT system instructions. That is exactly where I was going with this, right? Um so the point is that by default LLMs are like humans and you can make them even more humanlike by giving them a specific personality. Right? Ah, so the upshot of this entire
conversation, the way I see it is that I should be treating the LLM like a friend, like a colleague, like a co-orker, like an intern, whatever role I want them to play with the understanding that a particular big five personality trait in them will enhance the work that I'm trying to do with them. Yeah. I mean, see, a lot of people don't believe too much in the power of these personality traits, right? So I mean if you believe that if you have an understanding of that definitely you can use this but even without that uh right just with I mean all of us we know that if you look at your teammates and your uh
friends and relatives and all that you have an intuitive sense of this person is good at this I should behave in a certain way with this person right all of that is applicable with LLMs also right And they do have their own personalities, right? Chargity 4 used to have a very strong certain kind of personality. And when they replaced it with Charipity 5 and got rid of 4, so many people were so unhappy that Sam Ottoman had to bring back uh 4, right? The other thing is that other model. So Claude has a different personality.
Gemin has a completely different personality. So be aware and use it. But there is one very interesting and new way that uh I mean this is more I have discovered but I think it is actually true also right that when you look at Gen AI right uh LLM and you notice that it is behaving in a certain quirky way it makes it easier for you to realize or ask yourself oh why don't I treat humans the same way right everything that applies is a lot of things that apply to LLMs probably also apply to humans, right? And I'll give you an example because you don't understand what I'm talking about, right?
Yeah. So, in my class, I usually explain to people that when Amazon they created an AI to make uh shortlist Rumés for hiring purposes, right? M and then when they were testing that they realized that this AI was mostly giving women very low scores or rejecting all women. All right. If the person's name had a woman's name reject if the person had studied in an all women's college reject. Okay. Right. And uh so I was teaching people that see it is so important to evaluate an AI for bias.
Correct. And then one of my students pointed out, well, the AI learned this from the men at Amazon because that's where the data came from, right? So, it would have been much more important to evaluate the men for bias. Very smart students you have, Naven. But yes, but see the point is that we I I know for I mean the point is that I clearly know that yes, AI needs to be evaluated for bias. Why didn't it occur to me that humans need to be evaluated for bias? Right? Another example is that u I was arguing with someone about use of uh AI for medical suggestions. Right.
And of course there are lots of problems with that. I mean if you use the AI badly uh it can give you pretty bad answers, right? Yeah. Scary. Yeah. But I mean on future IQ here we have a system called compared to what right? So yeah the LLM is can make mistakes but so do human doctors right and so there I was arguing with someone and I explained to them that patients should be taught how to use the LLM correctly to get correct answers and how to prevent hallucinations. How to notice that the LLM is trying to please you and give you the answer you are you want. So you don't want to sort of uh
imply that I want a certain kind of answers. All that training has to be given. And then suddenly it occurred to me that the same training should be given to patients about how to use real life doctors. Yeah. Right. So our brain is funny. When we look at an LLM, we think of whole bunch of things that should be done. But because now we can think of LLMs and humans as equivalent, all those uristics you can try using in your real life also, right? Yeah. And uh that's that's kind of the scary part and also the exciting interesting part where we are soon coming upon a future where the LLMs the AIS are behaving very much like humans
and some of that behavior should make us question about our behavior with humans as you correctly pointed out but at the same time it will be difficult for us to understand that there is a difference and a similarity at the same time. Absolutely. Yeah. Yeah. It's a very complicated and a very uh self introspecting self-reflection introspecting kind of a future ahead of us. Yeah. So, uh I think when you say exciting and scary, uh those are exactly the two important words, right? I mean, when you think about exciting, there is a large number of people who love LLMs.
Okay? And by love I mean actual love. There are people who are in romantic relationships with their AI and who you know treat an AI like their boyfriend or girlfriend and some have wanted to marry it right. Some have married it. There is a subreddit called AI boyfriends or AI girlfriend or something to that effect and there are discussions on when Chad GPT4 went away and 5K came. There are discussions that happened where uh some people actually went crazy like they lost it. Yeah. So this uh you know I mean especially when humans who have uh you know mental health challenges uh and they're not able to use LLMs carefully. It can be extremely dangerous. Uh so we need to be uh
careful about our use uh for for regular people regular usage. I think uh just knowing better how to use AI, knowing what are the problems you can fall into uh is good enough but for many situations uh I mean something stronger might be needed but that's an entire separate episode not going to get into that but I think the important thing to remember is that a this sort of is happening I mean you just have I mean you know you never think of Microsoft word as having personality and being lazy and things like that, right?
Unless you think of Clippy. Yeah. But uh you have to think of LLMs like this, right? But also like you said, it is kind of like human but not right. So this isn't always true. Uh and everything that we talked about might change. I mean because models get upgraded, right? And some things stop working. Uh the bribe thing for example doesn't always work reliably. Sometimes it backfires. Oh, because the LLM gets angry, right? Um the other thing is that if you have discovered the personality of a human, you can't just change it. You have to live with that. Whereas with an LLM, once you figured out that it has this personality, you could use prompt
engineering, you could change the prompt to change the personality. Yeah. Or as in the case of chat GPT, the company can change it without your uh knowledge, right? Yeah, but the big picture in all of this comes from Ethan Mik, right? Professor at Warton MBA College. He points out that suddenly because now LLMs have moved from software to being more humanlike, the importance somewhat has shifted from engineers to liberal arts and non- STEM people. Right? I was just thinking this. I was just thinking this that our conversations with the machines have gone from being a very fixed software programming language to natural language of English or whatever it is that you're not so much the language but the point being
that human psychology techniques, human persuasion techniques, human writing techniques, social science skills, all of those suddenly the importance has increased and will continue to increase. Right? Which means it is no longer just about learning programming, learning algorithms. It's also about learning human connection, learning how to deal with humans. Because the way you deal with humans is also going to be the way you deal with these AI entities. And I'm calling them entities with a very specific understanding that they will have personalities. Personalities that you can shape, design, create, destroy, whatever.
Yeah. I think the key I would say is one you should think of these models as knowledgeable human assistants. Second is that you have access to a bunch of them each with different personality. So you should use all of them, learn all of them. So the most important thing is that now suddenly you're a manager of a team of a whole bunch of different team players. who have different strengths and weaknesses, right? So, become learn management and become a demanding boss. Become a demanding boss to your AI agents. Uh but be less of a demanding boss to your human employees and you know colleagues and co-workers.
Give them a little bit of rest because now you have AI that can do 10 times more work. jaldi but this was a very fascinating discussion nav there is so much about AI agents that we've been thinking in a slightly different way and this perspective shift is needed for people to understand that there is so much more to AI than just a box where you type draw me a picture of xyz yes and uh that so much more is what you will have to develop by constantly practicing and constantly working with it about which we have an episode which we will line it up for you next. So go check that out. Sriant Naven Future IQ