INKOVEMA Podcast "Gut durch die Zeit" ("Well through Time")

#239 GddZ

Artificial intelligence as an explosive in the world of work

Is AI the slicked-up intern on the team that nobody likes?

In conversation with Jan Groenefeld and Dr Frank Termer

Dr Frank Termer: Doctorate in business informatics; mediator and conflict moderator; expert for difficult conversations in the digital transformation, association officer (BITKOM) until 2025

Jan Groenefeld: Graduate computer scientist; Chief Product Officer and AI expert at Rote Robben GmbH.

Small series: New technologies for mediation and conflict counselling. Part 10

Innovator's Dilemma
The innovator's dilemma describes the tension between optimising the existing and daring to try something new, whereby holding on to the tried and tested for too long can lead to a crisis.
Clayton Christensen
Clayton Christensen (1952-2020) was an American economist, professor at Harvard Business School and is considered one of the most influential management thinkers of his time.
Clayton Christensen's "The Innovator's Dilemma" (1997)
Clayton Christensen's "The Innovator's Dilemma" (1997) is an influential work in the field of innovation and management research. The core thesis: successful companies can fail even though they "do everything right" - because they rely too much on existing customers and existing technologies.
2 types of innovation
Sustaining innovations: incremental, step-by-step improvements to existing products/services for existing markets (e.g. better camera resolution).
Disruptive innovations: new technologies/models that initially appear less powerful, are cheaper or simpler, and appeal to new customer groups - but later turn entire markets upside down (e.g. digital cameras vs. analogue cameras).
The dilemma
Companies that stick to their most profitable customers often ignore new niche solutions (for mass use cases).
If these niches then take hold for the masses and are improved, they sweep away the old markets. At this point, however, the once successful companies have missed the technological boat.
Examples
Digital camera (initially for mobile phones) versus analogue photography // Steel ships (initially for canals) versus sailing ships (sea) // Hydraulic excavators versus cable excavators // E-cars versus combustion cars(?) // Streaming versus video stores

Contents

Chapter:

0:07 – Welcome to the podcast Gut durch die Zeit
1:01 – AI as an explosive for the world of work
6:35 – The hype surrounding new technologies
12:56 – The social explosives of AI
19:48 – The challenges of working with AI
28:11 – Creativity in interaction with AI
32:49 – Conclusion and outlook for the future

Detailed summary

In this episode of the podcast "Gut durch die Zeit", I, Sascha Weigel, together with my studio guests Frank Termer and Jan Groenefeld, take a look at the challenges and opportunities in the world of work in the context of new technologies, in particular artificial intelligence (AI). We discuss how current technological developments act as explosives for the world of work and what social and individual effects they bring with them.

Based on the premise that digitalisation has particularly shaped the interface between humans and technology, Frank sheds light on the need to design systems in a human-centric way. Jan, with his background in human-computer interaction, shares his insights on how people react to new technologies and which critical questions are relevant in this context. In particular, he addresses the question of what new technologies mean for the individual professional field and what value they actually bring.

A central topic of our discussion is the effect of AI on the perception of human competence. We realise that artificial intelligence is often perceived with exaggerated expectations, leading to a Dunning-Kruger effect: people overestimate their abilities and falsely categorise the technology as superior. Jan emphasises that AI serves only as a supporting tool and not as a substitute for human creativity and expertise.

In the course of the conversation, we also discuss the social challenges arising from the use of AI: is there a risk of uniformisation in the quality of work? And how do we deal with the fear of being replaced by AI? We shed light on the importance of technological understanding and a proactive approach to AI in order to overcome fear and recognise opportunities.

We conclude by considering how we can build a symbiotic relationship between humans and machines. We encourage the audience to actively engage with this technology to realise the benefits for themselves, rather than being driven by fear. The goal should be to see AI as a coach that supports you in your personal and professional development.

In the final round, we emphasise the urgency of dealing with these topics and set ourselves the goal of continuing this discussion in future episodes. The exchange opens up the field for numerous questions and shared insights, and we look forward to the next episode, in which we will delve deeper into the subject matter.

Complete transcription

(AI-generated)

 

[0:00]This playing yourself up as a superhero just because you now think you're three hours ahead of everyone with AI,
[0:07]
Welcome to the podcast Gut durch die Zeit
[0:06]whether such a general suspicion was intended. Welcome to the podcast Gut durch die Zeit. The podcast about mediation, conflict coaching and organisational consulting, a podcast by INKOVEMA. I'm Sascha Weigel and I'm joined here in the studio by Frank Termer. And if that's the case, then we're talking about new technologies that very few people have a firm grasp of. Hello Frank. Hello Sascha, I'm very pleased that we're here in the studio together again.
[0:30]Yes, and not alone. We invited two more people. Once Jan Groenefeld and then Jan Groenefeld again. And we've now brought one of them out again and then we're also well in tune. Hello Jan, we had our difficulties getting together here technically. Hello Sascha, hello Frank. Yes, absolutely right, but fortunately solved. Exactly, and I picked up on that because it's so emblematic of our
[1:01]
AI as an explosive for the world of work
[0:54]Situation. We want to take a look at AI, new technologies as explosives for the world of work. We have a tight calendar, we've made an appointment and then we have to deal with technology first and that with two experts. Let me start, Jan, before we come straight to you, so to speak, so that Frank can explain again why he wanted you here on the podcast. We already have a longer series here and it might be interesting for you to hear how we saw you, through which eyes and then we'll come straight to you. Frank. I'm curious.
[1:34]Yes, I'd love to, Sascha. Exactly, we've always thought about who we need to bring into the studio in order to offer the listeners out there good added value. And for me, new technologies always have a lot to do with the people that I focus on here in the podcast. And in the technological environment, this is very often linked to user experience and UX design. I got to know Jan quite a few years ago, precisely in this context. So how do you design systems in a human-centred way so that they can also deliver good, high added value. In other words, they are not just created for their own sake, but actually have an impact. And Jan has remained in my memory from the very beginning as a very competent, but also very critical zeitgeist when it comes to new technologies and people themselves. In other words, in him we have someone here who is very, very knowledgeable about technology, who also knows what he is talking about when it comes to people-centredness. But on the other hand, he also has a certain critical distance to these technologies. So not someone who thinks it's great, yes, but not for its own sake, but who always asks what we're doing it for and what it's good for. And that's why Jan was one of the first choices for me here, to say, man, we absolutely have to bring him into the studio. But Jan, I'd like to talk to you. Hi, you two. Do you recognise yourself?
[2:44]Totally. As Frank just said, we've already organised a few events together at Bitkom. Even when Frank introduces you as competent, opinionated and critical, you naturally feel well represented. Yes, people and technology, that's my topic and has been for 20 years now. I studied computer science ages ago and realised during my studies that pure coding wasn't my thing, but I already had a professor back then who introduced me to HCI, Human-Computer Interaction. And that's when I realised, okay, people and technology, that's where I actually want to go. I want to understand how people react or don't react to technology and how technology can be used for people and not the other way round, i.e. aligning people with technology.
[3:27]So this interface, where man meets technology, that is a field of investigation? It's not just my field of investigation, that sounds very scientific in the pragmatists' world. That is my field of activity. In fact, I would like to understand it less on a highly scientific, cognitive scientific level. I find it interesting to read, but I don't want to get into that. I really want to create opportunities on a day-to-day basis, as is now the case with AI, so that we as humans can benefit from technology. For me, the first step means that the new, and we're looking at the new technologies here, is not better for you per se, so to speak, but that you really look at the second and third glance to see where it actually offers added value. And what makes this technology different from the previous ones, so to speak? If we now take on AI or focus on it, what is special about the interface? So how do we meet AI? Oh, with completely exaggerated expectations. That might say it in one sentence. But you've just said a very important sentence in a subordinate clause. What's better or worse? That's exactly the absurdity. AI suggests that it will make everything better. That's the zeitgeist. The new has a dazzling advantage over the old.
[4:42]Yes, and it has a head start in the narrative that those who bring the products into play like to play up to us. But the reality is that it's not better at all, it's not worse, it's the same as before. It's a tool that you can use to create added value or you can leave it alone. It is also not a tool that fits everything. It's precisely this misdirection in the narrative that continues to bother me. But I think this is an effect that we've already seen with other technologies in the past. Whenever you introduce something new, there is a real downer in terms of productivity because you first have to learn how to deal with it and you can't cope and it doesn't work so well. The output is much worse, but once you've crossed this trough, it somehow gets better, doesn't it? That's the classic Gartner hype cycle that works for just about everything, yes.
[5:28]But AI is more of a rollercoaster. It's not just this one hill, then a dip, and then we orientate ourselves. It's much wilder. It's a so-called leap innovation. That's how you have to think of it. It's not this predictable curve. Probably, if you look back in ten years, then somehow it will be. But it's perceived as being permanently on the trampoline. And that drives everyone totally wild, totally crazy, in all directions, by the way. Both in terms of overestimation and underestimation. Once again, I find that absurd. Yes, I can confirm that too. You jump on one and you can already see that a new one is being built, a trampoline, and you constantly have the impression of being behind, of being behind the wave. That's actually what I find amazing about it. But nevertheless, it's really just a question of zeitgeist that the new automatically promises value without question. That's not just the case with technology, it's also the case in other areas, where the new is not only younger and more dynamic, but also better and somehow more sustainable. But when it comes to AI, I really don't think it's worth celebrating.
[6:35]
The hype surrounding new technologies
[6:35]How does that happen? So does it have something to do with the quality and strength of the technology? Are we just telling ourselves all this? Yes, the truth lies in the middle, I would say. Well, I've got a track record of 20 years of digital product development alongside my degree in computer science and I'm currently Chief Product and AI Expert at an AI start-up and I'm also a co-founder, so I have to deal with it very intensively. A lot, especially in product strategy decisions, is to quickly understand what is real, i.e. what is already possible and what can be offered in a product with a clear conscience, and where humans will always remain the leader. So how can I put it, if you believe this story by Sam Altman and co, then you can actually do away with us all, Sascha. Then we can all sit back and relax. The added value will create itself, the agents will do the work. That's rubbish and it's not a desirable scenario. Not on either level. Right now, the world is spinning a bit freely, perhaps even more freely, even more wildly than a VUCA, a BANI, all these concepts that we write about, that we are all somehow struggling a bit with the situation.
[7:52]And we are totally susceptible to just such a narrative. AI makes everything better. AI will get us out of the crisis. AI is creating a brave new world. We think that's kind of cool. And that's why it's a wet dream of hyper-efficiency that we want to follow. That would be very reassuring news. We have, so to speak, committed ourselves to explosives for the world of work and to the view that if what is being said were true, the problem would only really be there.
[8:22]Because somehow it becomes clear that I don't want a machine that does my job better than me; then I might as well go home. And how am I going to finance the next few decades of my life? Yes, that's the private focus, but there's also the social focus that you open up when it doesn't just affect one person, but a whole lot of people. And we've already discussed this fear of being replaced by AI. It comes up again and again, even in conversations you might have with friends or colleagues. Jan, what's that like for you? With this UX background, I think you've repeatedly had to deal with similar issues in the past, because they always come up, no matter what technology you introduce. Processes are streamlined, accelerated and automated, tasks are eliminated. How do you deal with these concerns when you talk to people and say, hey, here's a new technology, let's have a look at it, and then suddenly people come round the corner saying, oh, I'm being replaced here and I'll soon be redundant? With technical clarity and analogies. Technical clarity is perhaps best described by the sentence that I like to use in lectures and in the Masterclass: ultimately they are just on-board computers. They don't understand anything. No context, no nothing. They calculate syllables.
[9:33]Strictly speaking, not even syllables, but arbitrary parts of words. And they do this so exorbitantly well using probability calculations that we come up with humanly conclusive answers. Yes, that's how we form them. We form the conclusion. We are the conclusion.
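Jan's description of the mechanism, stringing word fragments together by probability calculation, can be made concrete with a toy sketch. Everything here is invented for illustration (the vocabulary, the probabilities and the function name); real models learn distributions over billions of parameters and sub-word tokens, but the principle of "pick a likely next token, repeat" is the same in spirit:

```python
import random

# Toy bigram "language model": for each token, a probability
# distribution over possible next tokens. These words and numbers
# are invented for illustration only.
BIGRAMS = {
    "the": {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
    "sat": {"the": 0.2, "<end>": 0.8},
    "ran": {"the": 0.3, "<end>": 0.7},
}

def generate(start: str, max_tokens: int = 10, seed: int = 0) -> list[str]:
    """Repeatedly draw the next token from the current token's
    distribution - no grammar, no meaning, just probabilities."""
    rng = random.Random(seed)
    tokens = [start]
    for _ in range(max_tokens):
        dist = BIGRAMS.get(tokens[-1])
        if dist is None:  # unknown token: nothing to continue with
            break
        nxt = rng.choices(list(dist), weights=list(dist.values()))[0]
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return tokens

print(" ".join(generate("the")))
```

Run it with different seeds and the "sentences" change; any plausibility the output has is supplied by the reader, not by any understanding in the machine, which is exactly the point made here.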
[9:50]Yes, yes, yes. And we attest to that. Completely wrongly. We totally buy into it because people tick the way they do. If we recognise a human pattern that doesn't seem strange to us, in this case chat windows, especially the plausible response to our input, then we anthropomorphise. We think that's kind of cool. But we remain word computers. And in this certainty that they don't understand anything, that they only calculate skilfully and suggest to you that this calculation task, this word calculation task somehow makes sense, I think it has already become clear that you as a human still have the upper hand in terms of expertise and, above all, methodological competence. Things that the thing can calculate and guess well, but never as accurately as you would. That's where I'd like to intervene. I fully agree that this machine understands and calculates nothing. It does that really well. I can make good use of it if I tell it that it means something useful because I recognise meaning in it. So I make sense of what it gives me and can actually form new thoughts and reflect on them. Someone else, let's take my client or my employer, who looks at what I do with the machine and what the machine does with me, if the now....
[11:10]If you attribute competence to the machine because it is not a personality trait, but an attribution and say that the machine does it much better than you, then I already have a problem.
[11:23]Then I can think, well, I'm somehow more intelligent than the machine, but unfortunately my employer has to think the same of me. I don't have an employer, so I have fewer problems, but I have less. It is precisely this argument that brings us to the title of this episode, the social explosives. Because if you look closely, it's actually Dunning-Kruger on two levels. Dunning-Kruger in a nutshell: I pretend to be able to do something and I can't really do it. I just think I can. The Dunning-Kruger effect. I don't even realise it, exactly. I exaggerate.
[11:56]And the first Dunning-Kruger level here is that the model claims that it can do something that it can't actually do. And the second Dunning-Kruger occurs with your colleague who uses the thing and now feels empowered by this result and also attests to a truth, a competence, a quality. It's not true on either level. The truth is, and here we come to the analogy, AI is nothing more than a really good cordless screwdriver for a craftsman.
[12:24]But the cabinet has not yet built itself. The cabinet's construction plan is also far from clear. That's where I would come in again. That may still be true for the first generation of AI or chat GPT, as we all got to know it two and a half or three years ago. But if we now look at all the automation tools, AI agents that are now coming around the corner, that can really do things automatically, where I perhaps give the order, hey, do this and do that for me and things happen automatically in the background and magic
[12:56]
The social explosives of AI
[12:52]is involved, then it's already the case that it's not just a cordless screwdriver. So then I ascribe real expertise to it and it can actually change things in the real world. From my point of view, I think that's the next step that we've just seen. As long as they were just chatbots and I have an interface or I talk to the thing, then it's more or less in the mind, there's a lot going on. But if I now really have agents that are connected to the real world, so to speak, that can carry out transactions, that can somehow process payments, purchases, trigger orders, negotiate with other AI agents and the like, then that's another level for me and.
[13:27]No longer comparable with the cordless screwdriver. They remain on-board computers with maximum freedom. Incidentally, nothing will change in this regard in the near future. This basic foundation, that ultimately it really is simply the stringing together of words using probability calculation, holds also at the reasoning level, i.e. also at the level you are now describing, where they make automatic decisions. I no longer believe that this technology will simply evolve. It needs a new one, so to speak.
[13:56]Gen AI, generative AI as it stands, doing the same thing again at this level. It's not a logical progression; it's not a case of we'll tweak another billion parameters and then we'll be ready. Because it's simply a completely new technical hurdle for what you describe to be realised and for us to really dare to run these things, even for relevant processes and not just for writing an email. And even then, a lot can be broken in communication. It's a jump on this trampoline that many people would like to predict right now and others would like it to be there already, but it's not. Frank, do you use an agent, really in this agentic sense? I'm not at that point yet. But it's an important point for me. So let's take the fact, or just the opinion or the assumption, that it is and remains a probability calculation. I can't tell from the word alone whether I would understand what it does. I say, it's such high maths, I have no idea.
[14:59]But what these machines can already do is really helpful. Be it in small work steps, be it in very specialised steps by scientists and researchers. So everyone can use this AI to really make their lives easier. I don't want to go to the big societal level, so to speak, where industries disappear and the business model changes. It's quite normal that this changes over time and has to be adapted. So it makes sense to assume that this machine is intelligent or helpful.
[15:34]Because I actually get the answer that helps me. I get a new text, I get a new translation, I get something sorted out that I wouldn't otherwise get done in years. And let's assume that this turns out to be the case, even in a few years' time, that this AI development won't actually bring anything more. Nevertheless, it is socially explosive for a society like ours, which has become well established in its engineering skills and which simply struggles a little with this application of technology. Once again, that would simply be our sluggishness in being curious about new things. I find that very exciting in what you say. One thing, it's already damn helpful. In brackets, only if the person remains in charge. Both in terms of expertise and process, so that you can verify the result. In a masterclass, a participant who came from the communication corner of the company once said that this thing makes word vomit understandable. And that brings us back to the cordless screwdriver. The AI has that. Or it has a really good sense of language.
[16:50]The point was that, as with the cordless screwdriver, it drives the screw in faster, so that she can let her wild thoughts take shape more quickly and then continue working herself. This is exactly the human-AI rhythm that I also recommend. For now and, to be honest, for the next stages too. The social explosives, the whole episode, the whole endeavour, the whole project that we're getting together here, has generated a picture in my mind of how I imagine AI as a colleague. And I realised that it's actually a total "Assi". He's such a lickspittle. The assistant or the antisocial one?
[17:27]Yes, we might each picture it a little differently. But I imagine some over-gelled, slicked-back new trainee who thinks he holds all the wisdom in the world. And has something to say about everything. A real know-it-all, the kind of guy you don't really want in your team. Hey you, who's listening to the podcast right now. We bring you a new episode every week in this podcast. You can listen to it too and we need your support. Take your smartphone, leave a star rating and a comment on how you like the podcast and make others aware of this podcast here. Thank you very much and now the podcast continues.
[18:14]But that's where I would come in again, because it depends on what you actually do with this tool. So I'm with you on that. It's a tool and we can use it for our own purposes or not. And that's where the first division starts for me. There are still enough people out there who don't know anything about this AI stuff, don't want to know anything about it, have never tried it out and nothing at all. And they are gradually being left behind. And the second thing is, as you say, we have to decide what do I actually want to do with this tool? What kind of tool should it be? What should the assistant or colleague look like? And yes, there are a lot of people who do these entry-level things, overwrite texts or generate ideas, whatever. The next stage, and we've already discussed this.
[18:54]I don't know what kind of IQ AI has nowadays, but I can imagine something like an IQ of 140 on various tests; I think it has already been certified as having passed many tests and examinations. So if I can hire a colleague with an IQ of 140 for a pittance, so to speak, then I don't just give them menial work, in the sense of revising a text or giving me a few ideas. Then I somehow have a highly sophisticated consultant, or I have someone who really challenges me, in the sense of: make me better, show me my blind spots, help me to improve in the areas where I'm not yet good. Then I'm hiring someone completely different, not for tasks that I no longer want to do myself or am not good at, but to make myself better. Then I have a coach, a trainer, a counsellor at my side who can always push me to become even better. I think that's really the point where there can be real added value through such technologies if
[19:48]
The challenges of working with AI
[19:45]I just configure it correctly for these purposes and use it for them. Yes, but there is a huge difference between output and outcome.
[19:53]The thing produces output, and lots of it, if you ask me. But for now it's just quantity. Whether it's quality and then with effect. That's complete coincidence, it's complete free flight with this thing. Or it's actually being poisoned, that's the situation. Let's stay with this gelled intern for a moment. The stupid thing is that the management is totally in favour of this trained intern, because he constantly delivers output. And that makes them visible at first. And then the colleagues come along and join in.
[20:24]But Frank, what does that do? It creates a uniformity, an AI uniformity. I've experienced this myself in the team and have had to actively counteract it, because people are suddenly only chasing these KPIs, this supposed quantity KPI that looks good enough, shiny enough, to suggest it reflects quality. This creates uniformity in the worst sense of the word. Really dumbing down the creative output. And you would say that this is the form of social explosive that we need to focus on. Not so much new armies of unemployed and highly qualified people who are no longer needed, but you're saying that the quality of people's work decreases when they use AI. Exactly. So I think the bigger problem is that we will end up with a lot of fake heroes in teams. And you won't be able to see through them properly, because a cordless screwdriver you can at least see in someone's hand. But whether they've used AI you only realise when the result is three levels further down the line and it's nonsense. Nobody can see that so quickly. And playing yourself up as a superhero just because you think you're three steps ahead with AI also creates a generalised suspicion. It's almost like that, and to be honest, I've also experienced it myself: when a result from a team colleague came too quickly, I actively asked, which model did you use? I didn't use a model at all.
[21:44]So I wanted to understand where it came from in order to be able to categorise it. And that's the social explosive for me. Let's be honest, there was an industrial revolution and the fact that our job profiles are changing is nothing new. And the fact that we're abolishing entire job profiles through further automation and running in comfort zones is nothing new either. We have all done this successfully as a society many times before.
[22:09]Made, done, behind us. And yet patterns can be seen: as a society we have moved, so to speak, from crafts to services and now to creative professions. We cut out whole fields by bringing farmers' children into the city, where they needed hardly any training to become wage labourers and assembly-line workers, but simply had to hold out for their 14 hours. And then the vast majority of assembly-line workers migrated to the service society. That's something you still see when someone starts at McDonalds and is taught friendliness and discipline, drilled in again and again. But from there, or from the supermarket checkout, to the creative professions it was rather more difficult, and fewer made the move. And other areas are not opening up. Individual professions are disappearing, of course. And you could say that was the case yesterday too. But I don't see a new field for the majority of the population if wage labour is to remain the model of the future. That's where I would see the social explosives, although it's not as if we wouldn't have this without AI. Admittedly something that I wouldn't push into the future, but does the click economy mean anything to you? Yes, in broad terms.
[23:39]Ultimately, the exploitation of people for pre-training or data preparation and also the alignment of data frameworks for models.
[23:50]And that is currently associated with exploitation. But that would actually be a wage income if it were regulated sensibly. And also a completely new one, which didn't exist before, especially not to this extent. I recently learnt that this is now changing into something else, namely towards highly decorated experts. They are being used to train models for thousands of euros a day. Entire industries are based around the fact that their knowledge flows into these models. It's a billion-dollar business. It's not just about the click as such, which is assembly line work, so to speak, but you look at highly educated people, what they do and what they think, so that you can use that as data. There are both ends. Again, these are professions for a rather small number of people. And I'm with you on that, Sascha, AI is not going to go away. We've already discussed that. The technology is here. It was the same in the past. It won't disappear again. But the hope is, of course, that it will bring more added value and automation. And just as you said, to start with, we can sit back and the value creation will happen on its own. Even if we are perhaps not there yet, but that is the dream of mankind, to get there somehow. And we may then have to deal with completely different questions. Where will the money somehow come from if it is no longer wage labour? And yet somehow we still need it to shop, to survive and to cope with everyday life. And I believe that AI is now massively accelerating the need for us to deal with these questions on a social level. And then there's the competition between systems, we could also say.
[25:19]So there are so many dimensions in there. I don't think we've even dreamed it all through yet. But don't we still have enough fields where it would be worthwhile to automate this with a combination of AI and robotics? Take the care system, for example. Japan is a big pioneer here.
[25:38]They live with the fact that they are cared for by robots. That is culturally recognised. The things even have a soul and so on. They're not so concerned with this ethical moment that we're dealing with here now, which is valuable to us. They create solutions. And that happens and it works. That's also the point where I'm actually less worried about our society, so to speak, because we're still far from being the most advanced in AI and don't need to think about it now, but rather we have the problem that we're starting from a relatively high standard of prosperity across society and are saying that we don't really want such an upheaval of our routines, such a change as AI is forcing, and that we're behaving rather stolidly and reacting rather defensively to it and it's not really getting off the ground socially. So I would still see the willingness to invest in infrastructure as the biggest problem socially. The fact that we simply haven't invested in infrastructure and are not doing so. It starts with schools and teaching, but also with the entire digital infrastructure. That we are simply sluggish or almost lagging behind. Do you share that or is it more of a pessimistic complaint? The fact is, we are lagging behind. I see the reason differently. We were doing too well. In the end, the whole of Germany was in the innovator's dilemma.
[27:03]The automotive industry flourished, we were the heroes: industry, machines, everything from Germany, exported around the world. Solar was once our topic too. So we turned this innovator's dilemma into: we are doing well, we are doing very well, we'll still be doing well tomorrow and certainly the day after tomorrow too. Now we are in a situation where we face a huge renovation backlog.
[27:24]Both in terms of infrastructure and the economy. And within this renovation backlog, we would rather return to a stable situation, which, at least in my environment, I feel leads people to dismiss AI because it is one thing too many for them. Because, and this is also part of the truth, it is simply not a sure-fire success. It is proclaimed as a democratised do-it-all: anyone can do anything. That's just not true. You can still produce a lot of shit with AI. Nothing is democratised. It's the techies' old dream that democratisation goes hand in hand with new technology. It was like that with the internet, and now it's the same with mobile devices. Okay, but that means, if you take the model of the innovator's dilemma as a basis, how do we get out of it?
[28:11]
Creativity in interaction with AI
[28:11]Being creative - humanly creative - with this new tool. There is talk of multimodal models, meaning that the thing can think in images, can calculate, can reason in words, and can receive speech or even produce speech itself. A person's multimodality is similar, purely in terms of sensory perception. We also have a few more senses, plus our understanding. And we have something, I can't find a word for it, but a certain arbitrariness in our perception. Just the fact that we're talking to each other now, the noise in the earphones, the slight feedback, all of that does something to me. It feeds into my own internal computer.
[28:53]And this almost arbitrary nature of our processing, which leads to new ideas, is for me a bit of a human-magic moment, something that you just can't teach a model, because it remains within its probability calculation. It wants to solve the most probable calculation for you so that you as a user remain happy. And this becomes clearer and clearer the better they get at it. Do you think this is something that will become more prominent, and that, paradoxically, the more I deal with, use or approach AI, the more likely I am to sharpen my own profile? That's exactly it. So I always attach great importance to my participants understanding the technical basis, so that they can anticipate what they send in as a calculation task, aka a prompt, what they get back, and how they can continue to work with it. That's a nice moment, so to speak, because we're differentiating the technical foundations a little, or at least what you, Frank, were talking about earlier with agents, assistants and all the other tools and applications that are now available, so that we can shift that again, go into it more deeply and capture the moment here, which has both directions and thrusts.
[30:05]Engaging with AI is important, but not in the sense of running after it; rather in the sense of recognising myself. So I don't just lose myself in AI development tools, but I can also use them to recognise myself and my own creativity. That was simply the point where I said: that's quite fitting. I have to keep an eye on the time a little; it is quite humanly limited for me, not in an absolute sense, but in a very everyday sense. Frank, you wanted to say something again. Yes, I would perhaps ask Jan a very specific question at the end, in the sense of: the AI is here now, it's not going anywhere, I think we can assume that. And humans are here too, and they also want to have their place in some way. What do you think the ideal interplay between humans and technology, or specifically humans and artificial intelligence, looks like? How can we organise this well, so to speak? Above all, we should put aside our fears. Fears that our job profile could change too much or cease to exist. We can't change it anyway. So at the end of the day, you can only keep the situation under control for yourself by maximising your understanding: by experiencing, shaping and knowing. What I would like to add, which I sometimes find a bit annoying, is that it is a bet determined by others. We never woke up and said, oh, GPT, we want to work on this all day.
[31:26]No, it's the tech billionaires from America who have this bet running, on this technology too. They have invested a great deal; they can't get out of this bet themselves. It reaches right into the large state apparatuses. But in this setting, I think we fare best if we make the best of it. That includes actually trying it out, hands-on, not just reading up on it in theory, and doing so with interest, with curiosity. And in that respect it is a democratised tool after all, because apart from those 20 euros a month, anyone who wants to can open it up for themselves. Everyone should do that; this is something that simply isn't going away any more. A second part of the truth, perhaps as a transition to the second part: we also have to be clear that OpenAI and co. have no real interest in you, in me, in us becoming better at our jobs. They want to commercialise, by pulling you into the chat and selling you a trip, selling you clothes. They want to exploit you fully, commercialised. They want to steer your consumption. It's all heading towards advertising again. Everything else comes a distant second, third, tenth. It also stems from the fact that we as users have shown the corporations what we use this technology for: self-optimisation and life advice.
[32:36]Those are the top categories in which we use this highly exciting technology. Of course, what else should they do but say: all right, then I'll sell you stuff. The three top goals: purpose, life advice, coaching.
[32:49]
Conclusion and outlook for the future
[32:46]And it is part of the business after all, and we have to provide for ourselves in that regard. That is a good point at which to wrap up the first part here. I find it has delivered on both counts. First, we should engage with this, because nothing else is left to us. And second, it is a clever idea to do so with full motivation and to make the best of it. That also means: test things right away while listening to this podcast, and don't simply move on to the next episode. Jan, it also means we would like to hear from you again; then we'll have a little more time, because by then we will have learned the technology and will have the full time to really take a close look at the technology and see what we are dealing with. With pleasure. Many thanks for today. Frank, a good time to you as well. Thank you, thank you. Yes, likewise, thank you.
[33:36]Until next time. We'll hear each other again. Take care, both of you. Ciao. Bye bye.
[33:41]Yes, that was our first foray in the small podcast series Well through time: new technologies in mediation and for counselling, with Jan Groenefeld. We got to know each other a little, but didn't have enough time; we'll let that pass. We had some difficulty with the technology at the beginning, but we were able to hear and work out clearly where the advantages lie, with what focus one should approach AI technologies, and that it really makes sense to do so. We must not feel inferior when we do, and we should not wait too long until it somehow becomes an absolute necessity for us; rather, we should engage with the possibilities with our heads held high and from a good position. That is why we talk about this here in the podcast, and that is why we are building our AI compass here at INKOVEMA, so that we really get down to doing and master the technology, because as human beings we are the drivers and designers. Many thanks for staying with us here at the podcast.
[34:45]If you have found this interesting, subscribe to the podcast and tell your colleagues and friends about it. Leave us feedback and a star rating; that of course helps us to make the podcast better known. For now, I say goodbye with best wishes. Until next time. Come well through time. I am Sascha Weigel, your host from INKOVEMA, the Institute for Conflict and Negotiation Management in Leipzig and partner for professional mediation and coaching training.