INKOVEMA Podcast „Gut durch die Zeit“ (Well through Time)
#185 – Move 37
New technologies for mediation and conflict counselling. Part 6
In conversation with Prof Dr Tim Bruysten
Prof Tim Bruysten teaches Business Transformation at the Fresenius University of Applied Sciences in Munich. He is the managing partner of richtwert GmbH as well as a partner and investor in other companies. His mission is to accompany and accelerate companies into the future. He has been self-employed since 1997 and has supported well-known national and international clients.
Gut durch die Zeit.
The podcast about mediation, conflict coaching and organisational consulting.
Content of the conversation
In this episode of the podcast „Gut durch die Zeit“, Sascha Weigel, Dr Frank Termer and Tim Bruysten discuss the impact of digital transformation and artificial intelligence (AI) on society and companies. A central topic is AlphaGo's „Move 37“, which showed that AI can develop creative strategies beyond human ways of thinking. The conversation highlights the challenges and opportunities of digital transformation, emphasises the importance of authentic visions and missions in companies, and argues that mediators should play a proactive role in change processes in order to navigate conflict and future-proof organisations. It also addresses the accelerating pace of technological development and its societal impact, including the possibility of AI independently developing new AIs. Finally, the guests discuss the role of mediators in conflict counselling in the context of technological change.
Transcript of the entire episode
[0:00] But this "I am" and then there's a job title, and that's just starting to falter. The question is not whether it's good or bad, but how did we actually end up there? Welcome to the podcast Gut durch die Zeit, the podcast about mediation, conflict coaching and organisational consulting. A podcast from INKOVEMA. I'm Sascha Weigel, and today I'm here in the podcast studio with Dr Frank Termer to talk about new technologies for consulting. Hello Frank.
[0:27] Hi Sascha, I'm glad we're here together again. Yes, Frank, we've already announced who we're talking to today about new technologies. You've practically brought us this guest. What do you think we want to talk about today? What can we expect when we ask Tim Bruysten what all this has to do with us? Yes, absolutely. So what can we expect? I would say that with Tim you can always expect the unexpected. I don't know anyone who comes up with so many things and ideas, who is so spontaneous and can link things together like Tim, and with such depth and expertise in this whole field of digital transformation. I've known Tim for a few years now and we've met here and there in a professional context. And whenever I talk to Tim or listen to his talks, I'm completely blown away afterwards and have to sort out my thoughts, because he's given me so much inspiration. It's simply sensational. So when we thought about who absolutely had to be on the podcast about new technologies and change, it was very clear to me: it could only be Tim, really. And that's why I'm very pleased that it worked out and that we can talk to Tim today. I'd say we'll start by welcoming him. Hello, Tim. Hello, yes, hello everyone. Thank you for the kind words.
[1:38] We'll say, or hear, a bit more about you in a moment, because you have a really interesting CV and you're deep into the topic. What's important about this announcement, I think, is that we probably won't be addressing specific tools or issues for the self-employed or for consultants today. Instead, I hope or imagine, we'll reach a vantage point from which we can at least get an overview of the entire field. And as diverse as it always seems and as overwhelming as it is, that means we have to climb to a pretty high altitude. Frank, or do you have another idea for this episode?
[2:21] Absolutely. I somehow wrote down 40 questions that I probably won't even get to ask. But with Tim we have someone here who is not only very much at home in the business world here, so to speak, but also brings an international perspective, who doesn't just theorise about things and look at them from a scientific angle, but always combines that with a very practical approach and simply brings an incredible amount of experience with him. He is out and about with his clients and customers at home and abroad and really helps companies to use this digital transformation and this AI revolution for themselves and to break new ground. In this respect, that's exactly the point: I think we need a certain altitude in order to be able to map this diversity and this holistic view. Tim, we've already talked about you. Who are you? Is it true what we have described as our impression of you? That's a high expectation I have to live up to here, but I'll do my best.
[3:15] Tim, maybe a little bit more about yourself. Who are you, and how is it that you are, for example, on the board of the Digital Networking Charter? I also read that you're involved with your own wine label and your own champagne label. That's quite a range. Yes, exactly. That's more of a side business in the catering corner for me. But maybe first to your first question. Well, I'm Tim. I've been self-employed for 26 years. I've built up a small business start-up organisation. I've been a professor for 13 years, I think. I'm currently teaching global strategy at the SK in Paris, and I'm on the board of the Charta, as you said, which has the task of networking digital players from business, civil society, research, education and politics in Germany, but also throughout Europe, and which was born out of the IT summit back then, now the digital summit, where we are also looking forward to the upcoming edition.
[4:18] This means that with you we can actually take a flight level, at least Europe-wide, from which we can see where Europe stands and where it wants to go, but also look at very specific issues of digitalisation and AI development from a very practical advisory perspective. And that's exactly what we want to do. I'm going to start very, very pointedly, because you also explained this in a recent article. You used the term, or the title, Move 37 to describe the special thing you want to work out when it comes to AI and digitalisation. What is hidden there? Why can we describe Move 37 as a turning point in history? And I'm probably not exaggerating when I put it like that. Yes, perhaps it's not an exaggeration to put it like that. Move 37 was, I think, about eight years ago; there was a famous Go tournament. Go is an Asian board game that is much, much more complex than chess, even though it looks simpler. The number of possible moves, the number of possible lines of play, is vastly higher than in chess. It cannot be solved even with the most modern supercomputers. Normal Go software plays well, but it's not at world-class player level, as is the case with chess, for example, where it has been ever since Deep Blue won against Kasparov in 97.
[5:48] It was actually clear that good chess software can no longer be beaten by a human. Not by anyone, at least not statistically. And that wasn't the case with Go. And it's actually still not the case when you're talking about classic software. Go isn't computable in the way chess ultimately is. It can be calculated in principle, but the number of possibilities is still dramatically too large even for modern supercomputers. Then they started to train an AI to play Go and realised: shit, it works, it can do it, and it can do it really well. They let it compete against people, and it won and won. And then there was this famous game with move 37, the 37th move, where the AI AlphaGo was playing against Lee Sedol, one of the world's best Go players at the time, and beat him. The special thing about this move 37 was, and you can still see it on YouTube, we watched it live at the time and got up at night because it was Korean local time or something: you could see the move the AI made, and the commentators, and the people around us, and the way we talked about it, were all saying, that's a funny move. The commentators were more along the lines of: the AI isn't that good yet.
[7:02] And I think the only person who understood it straight away was Lee Sedol, who was sitting opposite the AI, so to speak. You can see the jaw drop. His face falls, yes. He goes outside to smoke first, and you see him nervous; there's a camera shot of him walking around nervously outside. Then the move was analysed, and it turns out, and that's the special thing about it, that this is a move people would not have played like this, not even in the documented history of Go over many centuries. It's not a move that was somehow included in the data the AI was given for learning. In other words, the tried and tested patterns used to teach the game, the ones that lead to victory or success, would not have produced it. It was outside the realm of the imaginable. Exactly. AlphaGo has changed the way Go is played. So an AI has made a creative contribution to this board game. In other words, is it now played in the same way or is it now played differently? Humans have learnt from the AI how to play the game better. And the AI has achieved a creative feat, and that is the special thing here.
[8:12] Creative in the colloquial sense. If we look at it more closely, we would have to call it an emergent achievement. Of course, like every Go player, the AI learns by watching other Go players, so to speak.
[8:25] So it looks at the moves. And from these pieces of the puzzle it has reached a level of abstraction where what it did there can no longer be explained from the components it was given. In other words, what in physics is typically called emergence. And that's highly interesting, and that was roughly eight years ago, or, I can't remember the exact date right now, it must have been eight or nine years ago, in any case a while back. For, let's say, AI-informed people, that was the moment when they said: stop, what was previously the stuff of science fiction is now becoming technically feasible. An AI will not only be able to learn things via neural networks and then reproduce them; an AI will also be able to be genuinely creative and add something new to the knowledge space of humans. Not for everyone, but for a certain type of expert, this has been a given ever since. And people were actually waiting for something like this to happen. And then somehow nothing happened for a few years. That is something that has happened more than once in AI development: there was a so-called AI winter. There was something going on under the surface, but you only saw the ice cover, and there was little visible progress. And the next thing that really came to public attention was ChatGPT, almost three years ago now.
[9:44] Or is it almost two years now? How long is it exactly? Two years, right? It would be two years, exactly. But maybe a quick addendum, Tim, if I may just poke my head in. I can still remember that moment with AlphaGo quite well. I'll be happy to be corrected if I'm wrong, but I think there were two AlphaGos: AlphaGo and AlphaGo Zero. AlphaGo really had to learn from human data in order to understand the game, and then there was this duel. And then they took the second one, AlphaGo Zero, which didn't receive this input; instead, the AI played against itself and only learnt how Go actually works as a result. And that was always the remarkable leap, where they said: we haven't given you anything. It wasn't given any data or anything; it learnt the game itself by playing against itself. And that's where this Move 37 came about, where you ask, how can that actually happen? That was also special from a technical point of view, because the typical approach of taking data, giving context, adding everything you have and then training the system was replaced by something completely different. You basically put down an empty box and said: here, now learn for yourself. And it worked it all out and taught itself. And from my point of view, that was a milestone, also from a technological point of view, because it hadn't existed in that form before. We talk about model-free and unsupervised. Model-free means it doesn't get any instructions: it doesn't know what a Go board is, it doesn't know what the stones look like, or that there are stones at all, it doesn't know what the rules are, it knows nothing, zero, that's where the name comes from.
[11:08] Unsupervised means that while it's learning what it's learning, it doesn't get any support from outside; it has to work it out for itself. And that makes it even better. Exactly, there was a phase where they said it took 18 days for an AI that knew nothing at all to become better than the best AI to date. It was better than the best human players. They also competed against the world's top players: five or six of them, I can't remember exactly, played against the AI at the same time, so they played together against the AI, and they didn't stand a chance. So the paradigm of the combination of human and machine, which in chess was for a long time better than any computer on its own, where people still hoped that if the human brought in the impetus it would be even better, was overturned there too. If we also think in terms of your topic here, what has basically happened is that the anthropocentric view of the world has at least begun to falter. We as a society, as companies, as families, as individuals have this view that we as humans are somehow at the centre of the universe, that we explain the world, that we create everything that is somehow needed in the world. We now have competition. If we look at who can work really well with the big language models and what's already there, we can get things out of them that are crazy good.
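A minimal sketch of the self-play idea described above, purely as a toy illustration and not DeepMind's actual method or code: a single value table learns a trivial game (21-stone Nim, an invented stand-in for Go) entirely by playing against itself, without any human example games. All names, the game and the parameters are assumptions chosen only for illustration.

```python
# Toy illustration only: self-play learning on 21-stone Nim (a stand-in for Go).
# No human example games are used; the table learns purely from playing itself.
import random
from collections import defaultdict

ACTIONS = (1, 2, 3)            # a move = take 1, 2 or 3 stones
Q = defaultdict(float)         # Q[(stones_left, action)] -> estimated value
EPSILON, ALPHA = 0.1, 0.5      # exploration rate, learning rate

def choose(stones):
    """Pick a move: mostly greedy w.r.t. the current table, sometimes random."""
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(stones, a)])

def self_play_episode():
    """Both players share the same table; whoever takes the last stone wins."""
    stones, history = 21, []
    while stones > 0:
        action = choose(stones)
        history.append((stones, action))
        stones -= action
    reward = 1.0                            # +1 for the winner's moves ...
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward                    # ... -1 for the loser's moves

for _ in range(50_000):
    self_play_episode()

# The greedy policy typically converges on leaving a multiple of 4 stones,
# the known optimal strategy, found without a single human example game.
for s in range(1, 22):
    best = max((a for a in ACTIONS if a <= s), key=lambda a: Q[(s, a)])
    print(s, "->", best)
```

Scaled up, with a neural network in place of the table and tree search guiding the moves, this loop is roughly the shape of the self-play systems the speakers describe.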
[12:32] And yes, of course there are many discussions about how good this is in detail at one point or another. And of course, at the moment a person still has to initiate it for something to happen. So the thing is not alive, has no agency of its own and doesn't do anything on its own. Not yet, anyway. It's just a question of when someone builds a thing that starts doing things of its own accord. You mustn't forget, I sometimes hear, yes, then we'll regulate it. Well, we won't really regulate it at all. I think these are relatively helpless attempts that will collapse like a house of cards, because there is so much extremely good open-source software distributed worldwide. We can play European Union all we want. We can't catch this thing worldwide, and it only takes one out of eight billion people.
[13:18] And many millions of companies. There only has to be one person who cracks it somewhere at some point. And then it's over. And it's not just one person trying; there are certainly thousands, tens of thousands worldwide who are trying to drive the AI topic forward. Perhaps there will be another AI winter, where development is once again under a blanket of ice for a while and everyone is urgently waiting for GPT-5. That would be the usual game of annual renewal, but in the end it would only be an incremental engineering step, someone introducing a new feature.
[13:53] And so it goes until we get to the point, which already exists to some extent, where AIs start to design new AIs themselves. And that's the super exciting point, which is already on its way. If you look at Nvidia, they clearly say that the chips they are currently making are not something humans could build. It's an AI that lays out and optimises the chips, and these chips are used so that AIs can run on them, and AIs start writing software, and AIs start writing AIs. This is, of course, a process in which many, many people still have to keep giving the cycle a push, but it is running faster and faster, with less and less human influence, and at some point you can easily imagine that we have an AI that creates a baby AI and then deletes itself, the baby AI improves things again and again; the mechanisms for doing this are known. And then the next generation, and the next, and the next. And then the question is, how fast are these cycles?
[14:51] At the moment, we don't really know. A few months; eight months, some say, is roughly how long the AI improvement cycle takes before things are really significantly better again. These are short winters. So that immediately brings up the topic of acceleration, which you described as non-stop full throttle, so to speak. For the individual, that can feel quite resigning: well, if that's the case, I'd rather go on holiday again and do something else. For counsellors, mediators and coaches, of course, if I put the whole thing under the heading of identity, the task suddenly becomes clear.
[15:25] So what we have so far dealt with in our work under the heading of identity, questions such as conflict resolution, where identity issues are always involved, that seems to me to take on a completely new dimension with AI as a factor. Taking up a topic where it's not just about buying hardware and dealing with it on the software side, but about thinking through what it means for us. So we immediately get into philosophical questions. Yes, of course, just imagine you have a technology, so perhaps I should expand on this scenario a little so that what you've just said seems even more drastic. I wanted to avoid this drastic turn for a moment, but Frank, you're in on it, you're with me. Yes, yes, watch out. I don't think we can avoid it, Sascha.
[16:11] Just escalate this thought to the end. Do a thought experiment, build a scenario. It can turn out that way, but it doesn't have to. The future only consists of images and ideas; any idea of the future is a scenario that may or may not materialise. And one of these many possibilities is that we will have technology that continuously improves itself at an accelerating pace. And look at the impact AI has already had on other technologies in recent months and years, on nanotechnology, computer chips and synthetic biology, where there has been incredible progress that would not have been possible without AI, and where we are now talking about whether it is realistic to cure a large proportion of cancers in the next ten years. Is it possible, with the synthetic biology we can create with it, to make a major contribution to solving climate change, for example? Perhaps. As I said, scenarios. But it's not unlikely that this will happen.
[17:12] Now we are realising that AI is starting to take care of nanotechnology. So at the moment it's still driven by humans. But now let's imagine that these are automated improvement processes at some point. How many people still work in a factory today where a complex product is manufactured? Then we can imagine that in a few years' time, something like this will happen completely without people. Completely without people. And the factory will also be controlled by software AI. Let's imagine that these improvement loops are faster, faster, faster and drive each other on. Better nanotechnology will allow us to build better chips, which in turn will produce better AI.
[17:45] That means that in more and more areas of life, from factory worker to CEO, there will be an AI, a piece of software or a hardware robot that can do the job just as well or perhaps even better. And even where it can't do it just as well, it will be so much cheaper that the market will force us in this direction, so to speak. As I said, that's one scenario. And in this scenario we now have people, and here we come to your question of identity, who say: yes, but I am, well, my identity is that I am a CEO, that I am a software engineer. I'm a doctor, I'm an architect, I'm a lawyer. I am humanistically educated. I am committed to enlightenment. Yes, exactly. But this 'I am' followed by a job title is where things start to wobble. The question is not whether this is good or bad, that's another level; the first question is, how did we actually end up saying 'I am' and then a job title? If I take up the parallel from earlier, the parallel with Move 37, that would mean that what we are implicitly always pushing here, the idea of progress, can follow a completely different logic. And the question, or challenge, is: do we accept this? Do we accept a proposed solution that lies completely outside our logic of progress and development?
[19:13] Exactly, I agree with you. But if you look at the problem from the point of view of a sociologist or a strategist, I'll shorten it a little, then the question is: who makes a decision at all? Who can make a decision at all? Can society make a decision about its own progress? That is a big question mark. From a sociological point of view, from a game-theory point of view, we have to say: there only has to be one person in the world who doesn't stick to the agreed rules, and the market will make sure that nobody sticks to the rules. A typical prisoner's dilemma. Yes, we can use regulation, we can use safety measures to soften the sharp edges, and we should do that. But we have to think about how we organise prosperity, how we organise our lives, how we organise identity in a potential future in which we will very likely see a different form of social structure. Do the majority of scenarios see it that way? Well, we also have scenarios that say climate change will kill us all in the next 25 years; then none of this happens. And in between there are, of course, many shades of grey in the variety of scenarios. But basically, many scenarios point in this direction: that being able to say 'I contribute to the well-being of my family, my group, my circle of friends, my company, because I am a doctor and I can do this and that' is becoming less and less important, and that we need to find a new form of identity.
[20:36] There's a lot in there, Tim. I'm actually trying to sort it out a bit for myself. On the one hand, this speed, this acceleration, the time dimension you just mentioned. Perhaps you could give us another perspective on the dimensions in which we should think about societies changing, about this AI revolution really taking place and making itself felt in society as a whole. I believe that there are still many areas of society today that are virtually untouched by it. They are perhaps aware through the media and the press that something is happening, but are perhaps not yet affected by it themselves. In this respect, it's a bit of an exclusive discussion that we're having here, one taking place at the very tip of the spear. At the same time, we always say that professions and job profiles will change because the tasks will be different. So maybe I'm no longer a doctor or a mediator or whatever; my activities will be different. Maybe the label will still be there somehow, but what I actually do in terms of content will change. And the question is, of course, how willing we are as a society to go along with this change. I think you quite rightly asked whether we can or want to decide this at all. I keep noticing that there are a lot of forces of inertia that try to hold on to the status quo, or want to go back to what may have been better in the past, or was perceived as such, and say: wait a minute, things are happening very quickly, we don't really understand it, we have to protect ourselves from it, think about the risks. There are so many loose ends right now; maybe you can sort them out a bit and give us an idea of what would be possible from your point of view.
[22:04] So what you just said is totally understandable. And it's also human, it's okay. Not everyone can be deeply immersed in technological potential; there are also people who need to be experts in something else. That's how it is at the moment, at least in today's society. And we also see a lot of, let's say, very human reactions: fear of change, fear of uncertainty. At all levels in companies, in any form of organisation, right down to families. It's completely human, understandable and okay. The important thing is: it doesn't help. We have to take it seriously that people are afraid or have a different idea; that must be taken absolutely seriously. And at the same time, we have to say that we are in a game-theoretical dilemma. You can't change that by wishing it away, not even by wishing that the European Union could. It is here to stay. And in all probability, this will lead us into an acceleration of the acceleration that we have already experienced in recent decades. There are various terms that come up in this context that need to be taken with a pinch of salt, because they have also been twisted by all kinds of conspiracy theorists and used for purposes where they don't really belong. But if you look back a bit and see where a term comes from, there is a scientific basis, and if you take that, you're fine. At the same time, these terms have also been used by science fiction to tell great stories, where you can sometimes draw inspiration from them and sometimes have to say: we have to be a bit careful here if we want to stay within the bounds of realism.
[23:31] Nevertheless, there is this concept of a hard take-off, for example. We've just talked about these acceleration cycles. A cycle runs for six months, and the same thing that took us six months we then manage in five and a half months, and then we need another five, and then another four and a half, and so on. When is this cycle so short that it is no longer perceptible to people? Many would say: now.
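A back-of-the-envelope illustration of why such shrinking cycles amount to a "take-off", under the purely assumed simplification (the conversation states no exact ratio) that each cycle takes a fixed fraction $r$ of the previous one: with a first cycle of $T_0 = 6$ months, the total time for all further cycles is

$$\sum_{n=0}^{\infty} T_0\, r^n = \frac{T_0}{1-r}, \qquad \text{e.g. } r = \tfrac{5.5}{6} \approx 0.92 \;\Rightarrow\; \frac{6}{1 - 5.5/6} = 72 \text{ months.}$$

Under that assumption, every remaining cycle, however many there are, fits into roughly six years, and the individual cycle length falls below any human perception threshold long before then.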
[23:52] Yes, how short can it be? I believe that even the months we have now are already too short for many people. Given the way we have organised education, the way we organise the exchange of information in society, it is already so fast that what is happening right now is below the threshold of perception. A quick look back: German society is not exactly the best example of digitalisation, but 2020 was when everyone got the hardware to be able to hold Zoom and video calls, even in a professional context. And two, three years later comes AI. And that's just one topic alongside quantum and spatial computing. Everything simply has to be learnt all over again. Yes, exactly. And now meta-levels are coming in as well. So not just lifelong learning, but the meta-meta-meta level of lifelong learning. So I believe that for this transitional period of the next, let's say, probably 10 to 15 years, we need a lot of very good mediators who can simply help organisations, companies and society get through the conflicts that are foreseeable.
[25:01] But we also need mediators who have an understanding of this trajectory, of this development potential. It is foreseeable that technology will transform society. When exactly and how exactly is crystal-ball territory. I like to speculate about it, it's fun, but you have to remember that it's only ever one scenario you're talking about; there are 30 or 300 or 3,000 other ways it could go. What these scenarios have in common is that things will change disruptively. And disruptive in the literal sense, not in the sense we have heard time and again in marketing contexts in recent years. Yes, that was disruptive, and everyone goes, yes, that's disruptive again. No, really disruptive: a rupture, a break, in the original sense of the word.
[25:39] In other words, mediation, or what you mean now by mediators in the sense of mediation: there is this future that we are forced into, we don't know which one, but we have to take the step. We can't stand still and say: I don't like it. I have feelings of displeasure, I have to cry, I want to scream, I'll lie down on the floor, I'm staying here. That's not possible. It can be good for a moment. Yes, probably. But as you say, the dilemma is that it doesn't help. Either it goes on and you're still here, or it goes on anyway. And you see this mediation function as an important task: there are people who experience transformation from different perspectives or in different situations. You don't even have to switch on a computer; you experience it differently than if you simply walk through the streets six months later and realise that something has changed. Now would be a good time to leave a little review on Apple Podcasts or on the podcast catcher of your choice. Thank you very much.
[26:38] Actually, everyone wants progress. It's just that it's often not framed in this way. And I believe that even for many people who are not experts in any particular technology, it's difficult to see the connection at a glance. These are highly complex topics. But if you break it down to what we want.
[26:57] Further increase healthy life expectancy. We want to get climate change under control. We want to cure cancer and other terrible diseases. We want to become an educated, peaceful world community. These are the things we want. We don't want more slavery in our supply chains. And so on. There is a long list of social KPIs that we all want. And with those who don't want them, I'm happy to have an ethical debate. Then the question is: how do we get there? What do we do to get this under control? What price does society have to pay for us to cure cancer? What price does society have to pay to get climate change under control? Now all sorts of things are being said, we need to do this and this and this, and there are huge debates about what helps or doesn't help. But climate change is true, it's real, it's happening. We can see it out there. Last year in late August, I stood by the Rhine and had to look for the Rhine. And we can see these phenomena all over the world. That's just how it is. The best chance of getting a grip on it is to get a grip on it in prosperity. If we want to tell the whole world that you all have to go without now so that we can get climate change under control, that's a nice thing to say, but the world won't go along with it. There will be hundreds of millions of people who won't go along with it. And what are we going to do with them? Force them to join in at gunpoint? Not a viable option for me either. So how can we create a world that remains liveable, where we can get climate change, cancer and other things under control?
[28:24] But where we have a future that is a positive future for the children of the people who are here now. And that's a future that can be shaped by technology, nothing else. Tim, when I talk about identity, and Frank, we framed this series as new technologies for advisors, that sounds a bit like buying a new instrument: you go to the shop and get a new tool. It seems to me that completely new issues are now emerging for us to deal with, because this affects our work at its core. So everything you mentioned that leads to conflicts, Tim, is technologically coloured, so to speak, by this comprehensive transformation called AI. How would you frame the question? There are two things in there for us. The first, which we've already discussed here several times, is how our work in conflict resolution will ultimately change. Will our work be different? How can we use AI? In which phases, and what do we do there, and so on? That's one side of it. But what I find exciting about what you just said, Tim, is that the conflicts we have to deal with will themselves be different. When I look at the typical areas of conflict that many mediators are involved in, be it in the private sphere or in the corporate and organisational context, it's usually things like: the department doesn't get on with that one, or I have problems with my neighbour, and so on. But now new areas of conflict are suddenly emerging that are driven by technology, where the question is: do I introduce digitalisation into my factory, yes or no?
[29:50] Do I take provider A or B? What does that do to my employees? What does that mean for the works council? What is its role at this point? And everything depends, as you said very nicely, Tim, on this positive vision of the future: where do I actually want to go? And I believe that, at least in my perception, this clarity is still lacking in many areas.
[30:09] Both at a societal level, when I look at politics, where I tend to notice, especially in Europe but also overseas, a retreat into nationalism; people look to themselves and say: hey, we have to look after ourselves, we have to look after our country, we have to find solutions for ourselves, and the community doesn't come first at all. And I see the same thing in business, in the economy, where many find it difficult to develop a vision of where they actually want to go. What I mostly hear is: we don't want that, we should avoid that, we'd rather preserve this and that, we've always done quite well this way. What's missing in many places is this positive vision, this strategic view, recognising the potential and including it in the equation. The question is: can we support that, can we do something about it? Or why is it that people find it so difficult to develop such a positive vision, even though they know and realise that we need it? As you've just described, in many areas we need this North Star of where we actually want to go, so that we don't wander around without a goal or direction. Yes, exactly. I've just described this on a global, galactic level. Of course, the same applies to every organisation on a small scale.
[31:17] From SMEs to corporations, from an association to a political party. It just has to be scaled accordingly. What we do in our work, for example, are two things, two sides of the same coin, so to speak. On the one hand, we look at the organisation first and see, and we have been researching this for a long time and have developed very specific analytics for it, how you actually find out what is central to an organisation at its core.
[31:43] And we have found in our research that there is something in every organisation that is really unique to that organisation and that the organisation cannot shake off. It can do what it wants; that always remains the core of the organisation. There can be as many new CEOs as you like, they can order whatever they want: as soon as they look away, it snaps back. But it's not just this core that snaps back; if you look at it unfiltered, a lot more snaps back, including things that you definitely can change. And what we have found is that if you take this unique core of the organisation, if you really peel it out and then turn such an abstract metaphor into something very, very concrete in the actual organisation, giving it clear, simple words, then two things happen with this core. Firstly, it's always extremely positive. We have done and researched this with hundreds of organisations; it's always something extremely clear, positive and simple. And secondly, it's much smaller than you think. There's a huge pile of stuff and habits on the outside that can suddenly be discarded with ease. In other words, the moment you accept that an organisation has a true identity within it, and incorporate that into the company's decision-making processes and communication processes, strategy development, vision and mission finding and so on, everything becomes much more tangible, clearer and easier to understand.
[33:07] More credible, more trustworthy, and you can change more in less time. It's a sociological process. That's one side. The second side of the coin is that, as you just said, Frank, you need a clear picture of the future. And a picture of the future consists of scenarios: you say, okay, what are the scenarios that are relevant for us out of the infinitely large scenario space? Every organisation has a limited scenario space, a smaller sub-space, in its industry, in its market, in its culture. We need to look at this and then build a trajectory from our own central identity into this scenario space, in other words, end up with a roadmap that guides you. What we often see, and we are currently analysing this again across many companies, is how companies write about themselves. As a rule, vision and mission often sound very operational and very flat. But what really is your vision? It's not 'we're making better software for XY'. That's something anyone can say. If you write it like that, you simply have everyone as a competitor.
[34:15] What is a vision that is truly your vision, that inspires people for you, that has a common thread for you, that really has something to do with the way you were founded, that somehow reflects your everyday problems, that contains the challenges you have overcome, that is truly authentic, in the literal sense, not in the marketing sense? That's something you can do a lot with. And what you said, Frank, is exactly right: you need a vision of the future. If you develop it that way, it naturally feels a bit too big, a bit too far away. But that's important, because then it lasts over time. Then you can say that it will guide us for 10 or 20 years. We don't have to think in purely operational terms every three years, because then we run into the problem that it becomes outdated very quickly. And if we add acceleration on top of that, everything becomes obsolete even faster. That would mean constantly reinventing everything, re-explaining ourselves and creating a lot of new work for ourselves. In other words, if you make these things a little more abstract, a little further away, and then allow each individual employee, each individual stakeholder, to identify with them personally, and this is where coaches and mediators can be a great help, then you can achieve something that really drives the organisation.
[35:26] Nevertheless, it remains a negotiation process. That means people have to agree on it. So far, my understanding has been not that the core actually exists, but that it is something people agree on. No, no, the core is something you can measure. It's not something where people say: we want it to be this way or that way. That's a different level. The core of the organisation is something that can be measured but is not subject to a decision-making process. It's something that arises when the organisation is founded and comes into being. We are now, so to speak, in the field of organisational theory, and this would contradict the approach that says people experience organisations differently and therefore also identify different cores. You mean that, at least from a distance, you can trace this back to a core that everyone should somehow be able to agree on, because it has been measured correctly. The gatekeeper, the works council, the management: that's what we ultimately deal with in organisational development even without AI or new technologies. You could now say, okay, maybe you can re-measure that, or an AI could now deliver a result, similar to Move 37, where everyone sits in front of it and says: oh well, so that's our core.
[36:38] But I wouldn't have come up with it myself. And then it doesn't work. It's usually the case that people are surprised when you uncover and develop something like this. But it's done in the organisation, which means this measurement process takes place in and with the organisation. And when the result is there, people are surprised all the same; that's always the case. It is always something much bigger and stronger than people previously thought it was. And then, of course, you're right: everyone has their own perspective on it and can make their own very individual contribution. But there are also people who, when you peel that out, realise that it just doesn't fit for them. And that's a good thing. Yes, so there would be a role for consultants or mediators, so to speak. That doesn't seem new to me, let's say, because the idea that there is a core in the organisation runs parallel to the idea that there is a human core, the individual, the indivisible inner self, which is also not recognisable from the outside, and that it's important to find it. I mean, that's what consultants and organisational developers have always done. I don't yet see the new move in that, so to speak. But of course I can see that it is needed, also on the basis of conflicts where something like this is negotiated, or at least that the possibility arises that mediators are required, and increasingly so. That doesn't mean that all the methods used so far are somehow bad, but we have done a lot of research on this topic and I think we have a lot to contribute in terms of exactly how to do it. There is also a lot of hocus-pocus out there. The endeavour to deal with it is fundamentally something good and right.
[38:08] We don't suddenly have to reinvent mediation methods. The point is that the field of application will now develop, and mediation will of course have to adapt. Sure, of course. But the point is not that we're saying mediators now have to use AI instead of A, B or C. They're welcome to do that; if it's helpful, please let them. The point is that the fabric mediators weave will now be a different material, speaking metaphorically. Yes, I find that interesting.
[38:35] These are thoughts worth following up on now, Frank. Sometimes we really are talking about what new technologies can help us with on a technical level. Today, we again started with the question of where this will lead us in terms of the interpretation of these new technologies called AI, what they will do to us as self-employed people, as consultants, as people who are (or want to be) called in to deal with conflicts. Where are you at the moment, Frank, with this idea? Yes, I actually see us as a group, or as a profession if you like, as having a much greater responsibility.
[39:11] Namely to actively address this issue of willingness to change and to really position ourselves as active facilitators. In other words, not just being called upon, or coming to mind, when it's too late, so to speak, when the conflict escalates, when there's a deadlock in the organisation or in the private environment and people think they can't get out of it, but really making this active pursuit and accompaniment of change more of our issue. And this is because of a technological change that is simply taking place, where the question is no longer whether it will happen, but whether we get involved and take an active part in it, or whether we let it happen around us and then have to accept whatever situation we find ourselves in. I think that's a big change in how we see and frame the topic of mediation and conflict management, one we perhaps don't yet have in the profession. Then there is at least one identity facet, if I may interpret it that way, which has certainly played a role in the history of mediation, namely that we are somewhat reserved towards technology.
[40:11] How should we put it? Not anti-technology, but we want to get back to the human. We sit down at a table and so on, and conflicts that are sometimes caused by technology we bring back to the real and genuine, away from the artificial. This facet would actually have to take a back seat, and we would instead say: no, because there is no escaping technological development, we have to tackle it actively. I would see that as a challenge for mediators.
[40:37] The age of opting out, so to speak, is over, where you could say: fuck you all, we're going to an island now, we're setting up a commune and we're going to live real life there, quite directly. That would be over then.
[40:49] Yes, there may well be one or two people who do that. I think what you've just said is very important, and you put it well. At least that's how I see it. You said that mediators used to be called in when there was a conflict. But now there is an immanent conflict. It's actually been around for a long time, but it is now taking on a completely new intensity and speed. We've spent almost an hour, 50 minutes, philosophising about it in various directions. This immanent conflict of organisational development, which is driven by technology, is taking on a new sharpness. And AI is perhaps the central element in this, but there are many other extremely hard, extremely disruptive technologies out there that will simply come now, whether we like it or not, regardless of what the regulator says. It will come; maybe it will take three years longer, maybe it will happen three years faster, I don't know, but it will come. In other words, this immanent conflict that lies within the organisation and is caused by the technology will end up in people's hearts, will end up in their heads, they will take it home with them and so on. To uncover it, to point it out and, by pointing it out, to help people deal with it: this is where I believe mediators can make a significant contribution in the coming years. It's just starting to happen gradually; you can already feel it more strongly here and there. And it will become more noticeable in the next few years. We've just heard the term 'hard take-off'; when exactly the curve becomes that steep, we'll have to see.
[42:13] Some people say it's happening right now; I don't know, maybe in 10 years, maybe in 15, maybe in 20, but I think we should expect it to happen faster than we would like, and that we, especially if we
[42:25] want to secure and build sustainable, long-term prosperity and develop our way of life positively, need to start preparing now to take the steps that make organisations fit for the future. In the sense of also being able to deal with the unexpected, with two black swans a day, so to speak. An organisation has to be able to deal with that. And this is a conflict issue where the mediator should be ready and waiting. It would mean that you can no longer play this game of 'between the lines': a table is set, we both or all know that capitalism is to blame, the organisation is to blame, alienation from work is to blame, technology is to blame. That doesn't help; it may be right, but it doesn't help. We need a different understanding of how we conceptualise the causes of conflicts between people. I actually think that's a real task, and not a small one. Yes, but anyone can do that, can't they?
[43:19] Yes, but even that has to succeed first. Yes, of course. Exactly. I don't think anyone has an answer right now, but it's important that we have a dialogue. And in this discourse there are, of course, people who are further ahead in one corner and others who are further ahead in another. There will also be differences. But that's where we have to argue now. We have to argue and wrestle, and a social discourse must be able to grow out of that discussion. We also need all these small discussions to start with, and people have to get on board and either like it or dislike it. And we need to throw arguments at each other in a positive way. But it's also allowed to be a bit of a row. Yes, I think it's important to feel that too. And I think you also have to recognise and feel the fear that some people have and the enthusiasm that others have, and experience that emotionally. Tim, I've been given plenty to digest. I really have to let that settle. Frank? Yes, I told you at the beginning: conversations with Tim are always so horizon-expanding, and they really do give you food for thought for a long, long time. Tim has absolutely exceeded my expectations here again. Thank you, thank you, thank you.
[44:29] Then given the time, we'll just do a hard cut here, a hard take-off and see what feedback we get. We'll let you know, Tim. I'd love to, I'd love to. I'm looking forward to it. I don't think this will be the last time we talk about this in this context. I'd love to. I look forward to seeing you again somewhere soon. Keep wrestling. I'm keeping my fingers crossed for the move that's coming up for you. Good luck. See you soon. See you soon.
[44:59] Frank, we'll just leave it at that. If you've managed to listen this far, you'll be glad that there's now time to digest; you definitely need it. Thank you, Sascha. See you soon. See you soon. Thanks to you too. Thank you both. Ciao. That was our conversation with Tim for now. It went in a way we couldn't have foreseen, and yet had to expect. It's mind-blowing. First of all, this idea of escalating the thoughts, as he put it; that's exactly what happened. And it's a completely different approach to the topic of new technologies for consultants: it's not about a tool and whether I need it or not, but about what I do in the face of this dilemma that I can't get around. And that doesn't just apply to us, but also to those we work with. I think that has come out very, very clearly. These changes that are happening to us as a society are so diverse, so varied and so deeply disruptive, in the root sense of the word, as Tim put it, that we simply have to deal with them. We have no other choice. The alternative, as you say, would be to move to an island or dig ourselves into the woods or something and disconnect. But that can't be the solution. It doesn't work either. I wouldn't even say it's still possible. So that's over too. There is no longer an island.
[46:19] All occupied. That is precisely the point we are perhaps dealing with. Perhaps it is also our task, in a way, to embed these perhaps small conflicts that we are called in for in a larger context. To outline the context again and to show those involved in such a discourse, in a contentious situation, where they stand, what is happening around them and what points of contact and connections there are. I think that can be a good task for us: to bring this categorisation with us and to help people get out of a rather small, narrow perspective and place the whole thing in a larger context.
[47:01] Yes, I notice in mediation, but also in other counselling settings, that the clients involved often have the fantasy that there is an expert sitting there who no longer has any of this, who doesn't have these conflicts or doesn't know them. That's also a self-image, a colouring of identity, that I don't think mediators can easily avoid. And sometimes it really is the case that because we have distance from the conflict presented to us in mediation, we are simply more relaxed, further away, and can make sure we don't get drawn into the conflict issue. So we do supervision, we do therapy, so that we don't let certain issues get so close to us that they become dangerous for us when they do come close. That's a common way of thinking for counsellors and therapists. When it comes to new technologies, I got the impression today that this is no longer possible. Everyone at the table knows that, and it's good if the mediator realises it too.
[48:01] It's not just the line 'I'm a lifelong learner and I also learn from my clients', which is sometimes said rather patronisingly. No, we are genuinely not a step ahead there. We're not a page ahead in the book. We're just as caught up in the dilemma and don't need to pretend to anyone that we know the next step. That is true. It's a search and negotiation process, and we are right in the middle of it. These technologies have come to stay. They are not going away; development will not be turned back at this point. I remember, perhaps you've also seen it, there was once a memorandum, a moratorium, that called for an AI pause of several months, demanded by very well-known, high-ranking people, including those who are working on it the fastest. Well, you can argue or speculate about the direction and the motives behind such a demand. But even that wouldn't help at this point. As was just said, one of the eight billion people will somehow push it forward, will break the rules, so to speak, in inverted commas. And then it will happen at that point. In this respect, we have no other choice than to deal with it actively and then decide for ourselves how we want to handle it.
[49:05] What role should it play in my field? What role can it play? But we will come into contact with it more and more when we are called upon to help others with conflicts and disputes. Yes, and I want to come back to the question of what the task can be, and what the challenge is for us mediators, not just because we want to be called mediators or see ourselves as such. The question would then be: what difference does our work make when we are caught in exactly the same dilemma? It is no longer the case that I can say: I've experienced this myself, I've had conflicts like this and worked through them. That's what we can often read on websites: that was the turning point for me, which led me to develop into a mediator or a coach, and then the client can develop confidence in it, ah, he's been through it too. Ah, he also went through puberty once, good to know. Or he also failed with his company, good to know. That gives him credibility. That falls away. And then the question is: what is our value? Why should we be there at all? So we need to reach a fundamental understanding about what we deliver on the one hand, and about our self-image and the value of our mediation work on the other. That really seems to me to be another point where the game looks different.
[50:22] Where the set-up is different, it's a different game. I think this is also a topic that we often encounter in marketing, where the question is always: how do we actually position ourselves? How do we approach the market? How do we actually want to be seen and perceived? And it is often said that it is difficult if you present yourself as a conflict mediator; that has a very negative framing. You're always called in when nothing else works, when things escalate, and so on. And I think this now gives us an opportunity to turn the whole thing into something positive and to spell out where we can help and where we can provide support in order to steer things in a positive direction. The fact that we may still be doing the same work in terms of content is then not the main point; the question is rather how we see ourselves as mediators in the broadest sense, how we want to be perceived, when we should be called upon, and what role we want to play in the change that is currently taking place. Frank, yes, I'm coming to a point where I'm thinking of my colleagues, the perplexed counsellors, Günther Mohr, Rolf Baling and myself, and that's the point for me: maybe the four of us need to talk about counselling.
[51:25] Conflict counselling as a value. Exciting topic; we can discuss it in more depth. On that note, have a good time. It was fun. Good times. See you soon. For now, I'd like to thank you for being here again and say goodbye with best wishes. See you next time. Have a good time. I am Sascha Weigel, your host from INKOVEMA, the Institute for Conflict and Negotiation Management in Leipzig and your partner for professional training in mediation, coaching and organisational consulting.