Joy Clark: Hello everyone, and welcome to another conversation about software engineering. This is Joy Clark, and today on the CaSE Podcast I will be talking to Gernot Starke about aim42, a method for systematically improving software.
Joy Clark: Gernot is a fellow at innoQ and is well-known for his experience with software architecture and documentation, so thank you so much for taking the time to let me interview you.
Gernot Starke: Thank you, Joy, for letting me talk about one of my pet projects, my hobbies: improving software.
Joy Clark: Today we want to talk about aim42. Could you tell me briefly what aim42 is? Or actually not briefly, because we're going to be talking about it for a while.
Gernot Starke: Let's start with the word "aim" - to aim for something has a double meaning here; aim also stands for architecture improvement method. We are aiming for better software - better legacy systems, actually. So improving existing systems is the goal that aim42 tries to reach.
Joy Clark: That's great. What does the 42 stand for?
Gernot Starke: I think most technical guys should know about the 42 - it's the answer to the ultimate question of life, the universe, and everything...
Joy Clark: I was wondering...
Gernot Starke: This very strange British humor of the Hitchhiker's Guide To The Galaxy.
Joy Clark: Nice. I was wondering if it was a reference to that.
Gernot Starke: Several things I did in my past ended with 42, so I reused that suffix in the aim method. There are no three-letter URLs available any longer, so I had to switch to a five-letter URL, and then I used the 42.
Joy Clark: Smart. I'm interested in your journey to aim42. You've had many years of experience developing software and developing systems, so if you could just let me know what your background was, why you decided to develop the method...
Gernot Starke: If I think about my personal past - and I assume it's similar for many developers - most of our time, I guess 70%-80% of our time, we spend modifying existing systems, not building systems from scratch. And everything I learned in my formal education at university was just about building new systems, so my education covered maybe 20% of my daily work... Daily work in more than a decade of working with clients, customers and so on.
Gernot Starke: So I started thinking "How can we improve this 70%? How can we educate people?" How can we find a more systematic approach in this not building from scratch, but building on sometimes quite ridiculously bad grounds, where we have to add new functionality or improve performance, or whatever modifications clients need or ask us to do, and we need to do that.
Joy Clark: So aim42 is an approach to systematically improve software; what steps does that entail?
Gernot Starke: I have a thesis that during the last years, iterative approaches have found their way into all aspects of software - software development, operations and so on. So one of the basic theses in aim42 is that we need iterative approaches for improvement as well; not only for developing new systems, but also iteration in the evolution and maintenance of systems.
Gernot Starke: Aim42 actually consists of three phases, which we want to perform on an iterative basis. One observation was that developers tend to correct mistakes when they find them - the second they find some bad code, they tend to say "I want to improve that."
[00:04:34.23] I think it's often a good idea to step back and just start collecting potential mistakes or potential problems, and later on decide which of these are worse and have to be fixed first. If you spend the whole day refactoring small code smells, you might have improved the code a little, but you probably missed the actual huge problem in performance. That's why aim proposes those three phases: analyze the problems first, evaluate the ones which are bigger or have more impact, and in the third phase do the actual improvement.
Joy Clark: So in the first step the process is the analysis of the system. What tools are available to help me with analyzing my system?
Gernot Starke: An interesting question, because the first and most important tool is built into every human: your ability to communicate with existing stakeholders of a system. Before I start throwing a huge and probably complicated tool at some hundred thousand lines of source code, I'd like to ask the developers about their impression of the system; I ask the operations guys, the administrators of the system, what their experience with it is, where they find difficulties, encounter problems and so on. So usually you should start with what we call a "stakeholder analysis": have a look at which stakeholders are involved, whether you have access to actual users whom you could ask about their impression of the system, whether they know about problems or issues...
Gernot Starke: Ask architects, ask UX designers, and everyone who is potentially involved in developing or evolving the system, and get a collection of problems from these stakeholders. After you've done that, you might bring in a tool. That tool might be code analysis or whatever, but the stakeholder analysis is absolutely required as a first step.
Joy Clark: So you have the stakeholder analysis - what comes out of the analysis step?
Gernot Starke: In small to medium cases - let's say in the usual case - after the interviews with several stakeholders... Usually, we propose to do that in pairs; you have two interviewers and one or two stakeholders talking about the system and the problems. We more or less informally collect existing problems and issues on sticky notes, whatever (a very low-tech instrument), and start putting together a collection of these sticky notes on a wall.
Gernot Starke: I like these electrostatic sticky notes that you can just move around, making it easy to prioritize these issues, and if several people complain about the same thing, the issue moves upward in that priority list, because it might be more important or unnerving more people, or making more people's life difficult.
Gernot Starke: In this first analysis phase we usually come up with several dozens of these sticky notes on a wall, or on a notepad. Later we identify areas where we want to investigate deeper. As I said, it's an iterative approach. The first iteration is talking to people, and then we decide how to proceed, how to move on.
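As an editorial aside: the low-tech prioritization Gernot describes - an issue moving up the wall as more stakeholders complain about it - could be sketched in a few lines of Python. This is a hypothetical illustration, not part of aim42 itself; the issue names are invented.

```python
from collections import Counter

def prioritize(issues):
    """Rank raw issue mentions collected in stakeholder interviews.

    Issues mentioned by more stakeholders float to the top,
    mirroring how duplicate sticky notes move up the wall.
    """
    # most_common() returns (issue, mentions) pairs, highest count first
    return Counter(issues).most_common()

# Sticky notes collected from a few hypothetical interviews
notes = ["slow search", "flaky deploy", "slow search",
         "unclear logging", "slow search", "flaky deploy"]

for issue, mentions in prioritize(notes):
    print(f"{mentions}x  {issue}")
```

The point is only the ordering: three people complaining about the same thing outranks one person's pet peeve.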
Joy Clark: [00:08:45.27] So the analysis stage - is there a rough time...? How long does it usually take? Does it depend on the system?
Gernot Starke: If you talk to, let's say, a typical developer of a system, or a developer/architect of a system, such a discussion takes an hour or more; just talking five minutes is no use. And whether you have a small, medium or large system, you have a dozen stakeholders who can potentially give you valid input. So at least a day or two of just interviews, just talking to people, is the average.
Gernot Starke: Even for smaller systems where we used aim42 for improvement, we had at least two days of interviews with various people, consolidating the notes we got from these interviews, prioritizing the outcome. I wouldn't suggest trying it faster, because these people give us very, very valuable input.
Gernot Starke: Then we usually decide which parts of the code to analyze with some static analysis, which parts of the code to analyze with a profiling tool, and for whatever other issues come up, we find means to investigate further.
Joy Clark: Is that already moving into the next stage, the evaluate stage?
Gernot Starke: Actually, I skipped a part of the analysis phase that's quite typical for developers. If you as a developer describe a problem you have with the system, probably some dependencies you don't like or are making your life harder, you will most likely come up with potential solutions to that problem. "If we cut the dependencies here and move it to there, and rename that class to this one, or move the method from one class to another..." - in aim we call these potential improvements. So we write another type of sticky notes - green sticky notes - with potential solutions to the existing problems.
Gernot Starke: Quite often, developers know about alternatives. So you have one potential problem - we write it on an orange or red sticky note - and you have two options how you could improve that; one option is a quick fix, potentially not fixing it completely. The other one might be more expensive, but it's getting rid of the whole problem.
Gernot Starke: In the first phase, the analysis, we note down all of these. So we try to come up with both a collection of issues or problems and potential solutions, which we can then later, in the Evaluate phase, map to each other to find an optimal mix of things to do next.
Joy Clark: Do you also have some brainstorming of more potential solutions at that point?
Gernot Starke: That's a technique you can use as an interviewer - ask people about their problems or perceived problems, and later on ask them about potential solutions. You as an architect, or as a developer who wants to improve the system, will have some ideas how you could do that. Brainstorming is one technique or one method you could use: if you have a set of problems, brainstorm about potential solutions. But simply asking questions like "How would you do that as a developer?", or taking the problem you heard from one person to another person and asking them "What could you do?" - you find answers... Or at least candidates for how you could improve.
Joy Clark: Is there anything else that is included in the Analysis stage?
Gernot Starke: We started with the interviews, but we left out about a dozen other options we have in the analysis. I'm pretty sure all of our listeners know about static code analysis. If you, as a developer, as a stakeholder of the existing system, told me about a problem in the code, then we will most likely take a deep dive into that specific code you mentioned with some analysis tool.
Gernot Starke: You might do dependency analysis or complexity analysis; you might do some (we call it) software archeology - look into the history of the code: how it was developed, whether the original developers are still on board or have left, and whether you had a lot of different people working on the same segments of code, meaning several ideas or different concepts moved into that code, making it probably uglier than it should be... These are other techniques we use in the analysis phase.
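The "software archeology" idea - checking how many different people touched each part of the code - can be approximated from version-control history. Here is a minimal editorial sketch over invented sample data; in a real project you would feed it pairs parsed from something like `git log --name-only`.

```python
from collections import defaultdict

def authors_per_file(commits):
    """Map each file to the set of distinct authors who changed it.

    `commits` is a list of (author, [changed files]) pairs, e.g.
    parsed from version-control history.
    """
    touched = defaultdict(set)
    for author, files in commits:
        for path in files:
            touched[path].add(author)
    # Files with the most distinct authors first - candidates for
    # several conflicting concepts having crept into the same code
    return sorted(touched.items(), key=lambda kv: len(kv[1]), reverse=True)

# Hypothetical commit history
history = [
    ("alice", ["billing.py", "pdf.py"]),
    ("bob",   ["billing.py"]),
    ("carol", ["billing.py", "search.py"]),
]

for path, authors in authors_per_file(history):
    print(path, len(authors))
```

A file edited by many different people over the years is a natural place to look deeper during analysis.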
Joy Clark: Once we've analyzed the system, the next step is evaluating.
Gernot Starke: Can I go back to analysis and add some further...?
Joy Clark: Of course, I didn't want to move too quickly.
Gernot Starke: We had an interesting experience with a client, where we analyzed the source code and found pretty much clean code everywhere we looked. So looking at the code was a pleasure, but the performance of the system was so lousy... On one hand you have that excellently-written source code, and on the other hand you have that lousy performance, and end users were very, very unhappy.
Gernot Starke: So you have some areas where you should look beyond code, for example have a look at your database structures, not just "We are using Oracle plus Hibernate", but please look into the table structure and the foreign key dependencies you have. That client with the good code and lousy performance - they had four or five database tables with about 500 columns each. 500. Each. Which is absolutely incredible. I've never seen it before and I don't want ever to see it again, but if you only look in code, it's very difficult to see such things.
Joy Clark: Can you not see that in the code? I would think at some point in time they would have to...
Gernot Starke: In dependency analysis, or if you look for good code - good names, short methods - you don't find 500 columns. If it's well-hidden behind clean code, you will not find it ever just by looking at the code.
Gernot Starke: Another area of investigation is the issue tracker. You can have excellently written code, containing lots of errors - business errors, for example. I can write very clean code and just do a wrong calculation. That's quite simple. So if you look at the issue tracker and find "Oh, there are clusters of errors in certain areas of the system, or in certain components or building blocks", that's an indicator that you have to probably change the perspective on how you look at the code.
Gernot Starke: So you have to look at data, data structures. For example, you have to look at what is remotely executed and what is locally executed, because that's sometimes difficult to see in the code. If it's Java code, it can be executed on any of several machines. So if you don't look at the deployment or deployment concepts, it might be difficult to find out about certain problems. That's why in the aim Analysis phase we talk about breadth-first search. We don't want to deep dive just into one area, for example in code dependencies, but we want to do a breadth-first search, considering data, data structures, deployment, and even development processes.
Gernot Starke: It might be that you have several very clever developers tied or jailed into a very bad development process, so they cannot communicate, because they sit in different office buildings and work in different timezones. I think everybody knows that term "breadth-first", and in the aim Analysis phase we propose several topics that could be included in that breadth of analysis aspect.
Joy Clark: Have you ever had the experience that you accidentally missed something? You tried this whole breadth-first search, tried lots of different things, and later you realized, "We didn't even look at that..."?
Gernot Starke: I'm quite sure we often miss things. You never know if you missed a bug in code unless it manifests itself. Getting feedback from such an analysis years later is a good opportunity to reflect on your own systematic approaches. When I started maintaining systems, I just looked at code. Then I learned about problems we had in databases, or database connections, which were not that easily visible in code. So I have enhanced my own toolset over the years. That's what's currently written down in the Analysis phase, and aim is the superset of several contributors' experiences in finding out problems. I'm pretty sure if we get more contributors, we will find additional tiny aspects we have so far overlooked.
Joy Clark: But there's room in the toolbox. You can always add or take away analysis tools as necessary.
Gernot Starke: It's completely open source, and as with every open source project, if we get contributors with new ideas, we welcome that. And if people tell us about experiences... Actually, we have too few contributors from embedded systems. Embedded systems and the combination of hardware and software might have completely different types of problems. I have not encountered them in the typical business systems I have analyzed in the last years.
Joy Clark: So if anyone out there is doing embedded systems, then feedback would be great. After the analysis stage...? We've performed our analysis.
Gernot Starke: I hope you still have in mind that mental image of these dozens of sticky notes attached to a wall. Now we have to order them, to find out what is the biggest pain, what is the biggest loss for the business - and not only what is the biggest problem for a single developer. So we need a neutral unit of comparison for evaluation - that's why the phase is called Evaluate; there is that notion of value in it. I want to find out which problem has the biggest business impact - what is hindering the business from making more sales, getting a higher price for the software, or whatever.
Gernot Starke: We try to look at these problems from a business viewpoint, which is sometimes a bit developer-unfriendly, because developers complain, "Ugh, this is a very bad dependency", but the dependency doesn't have any significant business impact.
Joy Clark: Unless a developer costs a lot of money...
Gernot Starke: But if the developer never touches this piece of source code, you can just leave that bad dependency in, ignore it, and refactor something that has a real business impact - something making developers slower, or meaning that developers need more time to change certain parts of the software. So we try to find a unit, and that unit usually is money. It's Euros, Dollars, or whatever your currency is.
Gernot Starke: Let me take one step back. Evaluating issues is difficult, and it will take some time. In a first brief iteration we have a look at this collection of issues and, from our gut feeling, sort them into very high impact/very high priority on one side and lower impact on the other. So we don't evaluate 50 issues; we take out about five to ten and have a deeper look at those, trying to find out what the business impact could be - is it dozens of Euros, hundreds, or hundreds of thousands?
Gernot Starke: For example, we had a discussion with a client - that was kind of an embedded system; they were doing wind energy, these towers with the rotors on top, huge stuff. We had dozens of potential problems in their software, and when discussing the potential impact, we found out that there was a certain software issue that forced the engineers building these towers from concrete and steel to invest more concrete. They had to build the towers stronger than they actually needed to be because of a fault in the software. We are talking about hundreds of thousands of kilos of concrete wasted every year due to a potentially tiny issue in the software.
Gernot Starke: That was an interesting moment in that analysis, because these guys told us "We have to stop here and get one of the other engineers to listen to us", because this problem was potentially worth hundreds of thousands of Euros every year.
Joy Clark: That's pretty valuable.
Gernot Starke: Yes, that was an interesting experience, because looking at these software issues - if you just look at the code, you cannot imagine what the potential impact is. The software engineers had no idea what it meant that they had too little memory in the small device they built into the tower, so they couldn't include any simulation library in it, and nobody could imagine what that could mean. After discussing it a bit, we learned that it could make a big, big difference for these building engineers.
Joy Clark: Interesting story. So we have all of our sticky notes on the wall - we're still working with the sticky notes in the Evaluation stage?
Gernot Starke: And you have prioritized. You discuss, again, probably with various stakeholders, what the business impact could be - how much slower will the development team become because of this problem, because of these dependencies or whatever it is. One systematic approach in evaluating is to always estimate in intervals. We cannot calculate the business impact exactly, we can only estimate it - do an estimation from a lower bound to a potentially higher upper bound. If this interval is quite small, you are quite sure about your estimation; if the interval is large, you are unsure. So it could be just a few dozen Euros or a few minutes of developer time wasted, but it could also be that a developer wastes weeks, which is much more expensive, obviously. These intervals might be quite large, but after evaluating several of these issues, the priorities usually become much clearer.
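Estimating in intervals rather than point values can be captured with a tiny helper. The numbers below are invented for illustration; the only idea is that a wide interval relative to its midpoint signals an uncertain estimate.

```python
def interval_estimate(low, high):
    """Summarize a (low, high) cost estimate, e.g. in Euros.

    A wide interval relative to its midpoint means we are unsure;
    a narrow one means the estimate is fairly confident.
    """
    midpoint = (low + high) / 2
    spread = (high - low) / midpoint  # relative width of the interval
    return midpoint, spread

# A problem wastes somewhere between 200 and 2000 Euros per week
mid, spread = interval_estimate(200, 2000)
print(f"midpoint {mid:.0f} EUR, relative spread {spread:.2f}")
```

Comparing the spreads of several issues shows which estimates need another round of discussion before they can be ranked against each other.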
Joy Clark: So do you write it on the sticky notes then?
Gernot Starke: Several of these issues will definitely stand out and become really, really important. And then again we switch over to the green sticky notes, the potential solutions, and estimate how long the fix would take. If we have a problem worth, let's say, about a thousand Euros per week of wasted developer time, and the potential solution to that problem costs 20 developer days, you can set the cost of the fix against the ongoing loss, and then management or whoever can decide: "Do we want to invest to get rid of the problem? Or shall we just ignore it and care about something else?" That's often a very interesting discussion if you have a product owner on board, or a product owner plus management, and you discuss how to proceed. When they see the collection of problems, they are usually a bit speechless: "We didn't know we had that many problems, and we didn't know that several of these problems are so grave for the business. We always wondered why the developers are so slow - and it's not because we have bad developers." Usually you have severe issues in your system that are not visible to management or deciders. The Analysis phase of aim42 makes them visible, and the Evaluate phase shows how grave they are. Then you usually have quite a good basis to discuss the next steps - the Improvement phase.
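The back-of-the-envelope trade-off in that example - a problem wasting roughly a thousand Euros a week versus a fix costing 20 developer days - boils down to a payback calculation. All rates and numbers here are invented for illustration, not aim42 prescriptions.

```python
def payback_weeks(weekly_loss_eur, fix_days, day_rate_eur):
    """Weeks until an improvement pays for itself.

    weekly_loss_eur: what the problem costs per week if left alone
    fix_days: estimated developer days to implement the fix
    day_rate_eur: what one developer day costs
    """
    fix_cost = fix_days * day_rate_eur
    return fix_cost / weekly_loss_eur

# Problem worth ~1000 EUR/week; fix takes 20 days at a
# hypothetical 800 EUR/day
weeks = payback_weeks(1000, 20, 800)
print(f"fix pays for itself after {weeks:.0f} weeks")
```

Putting issues and fixes side by side like this is what turns the Evaluate phase into a conversation management can actually participate in.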
Joy Clark: So you're still working on the sticky notes and you've labeled them with a value they have, and now we're ready to go in the Improvement phase.
Gernot Starke: Yes. If you have a very large system and many stakeholders, this might be the time to switch from sticky notes to something electronic - I won't say the bad word, spreadsheet, if you know what I mean... Make them JIRA issues. So the grave issues - convert them to JIRA issues, and then they become a bit more manageable. Use your kanban board, whatever you like; if you have hundreds of them, we have to move from the sticky notes to some other representation.
Joy Clark: But it goes from a sticky note board to a kanban board somewhere online, or something.
Gernot Starke: Yes. Find whatever means is appropriate for the team, but don't go for high-tech first - go for low-tech first. Then, if the evaluation determines you have some severe issues which are business-relevant, you might switch tools to be more systematic, to be able to route each problem plus its solutions around, put it in a mail thread or a Slack channel or whatever means is appropriate to get several stakeholders together to work on it.
Joy Clark: Okay. Then you go into the Improvement phase, at that point?
Gernot Starke: That would actually be the most interesting phase for us developers - we want to improve the system; we want to make something better, we want to change some bad situation for the better. Our clients usually have to integrate improvements with daily business; they cannot just stop the world, improve the system and reboot the world again. That's not practical. You have to find means to do improvements in small steps, because business is constantly requiring new features. I call that daily business.
Gernot Starke: Within this stream of constantly arriving new features, we have to phase in these improvements. Instead of getting rid of a bad part of the code and replacing it with something new in one go, you have to plan for strategies: how can we slowly move out this bad component and slowly replace it, in small steps, with something better? This improvement requires some planning ahead, because sometimes you really have bad areas in code - they are ugly, their maintenance is very expensive, making them huge problems - but still you cannot just cut them out and replace them by tomorrow. That won't work, so you make it a replacement project, and there are certain strategies for how you could do that.
Gernot Starke: One strategy that's typically required is better modularization. Usually, if you have bad code - a very tangled mess of spaghetti code - you have to put an interface in front of that spaghetti code, so clients use the interface instead of directly calling into the spaghetti. And as every developer knows, this is probably a long and difficult process in itself, but it's a prerequisite for further improvement. We have to do some cleanup of modules and interfaces, and so on. What specifically, depends on the problems you are going to solve, but Improvement usually consists of a number of small steps toward getting rid of the problems.
Joy Clark: I'm just curious when you say interfaces - usually with Java systems, then...? Or have you had experiences with other...?
Gernot Starke: Interfacing against bad parts of code is completely independent of Java.
Joy Clark: My question was more along the lines of I'm personally curious about what languages you've worked with in the aim42 system.
Gernot Starke: Name it, we did it. At innoQ we work with several customers who have really polyglot systems. When we first talked to the developers, we were told it was a Java system, and later on we found out, "Oh, there's a little Python, and there's some Scala, and there's C++..." They had patched the standard software they got from SAP, so there's a bit of ABAP, and so on and so forth.
Gernot Starke: Several of our colleagues and I have done analysis on systems with seven or eight different languages involved, in significant amounts; not only small scripts, but really significant amounts of different languages. So yes, it's often the case that we have large amounts of Java or C# code - object-oriented code that can be analyzed with standard tools like SonarQube or whatever - but we also find systems that are a mixture with far too many stored procedures across various databases. We still have clients using mainframe technology, so we have COBOL code, we have AS/400 systems with various strange languages involved in existing systems.
Gernot Starke: And together with the new mobile stuff that usually relies on back-end systems... Mobile is developed in any number of different languages, so we usually have a mixture.
Joy Clark: Okay, cool.
Gernot Starke: No, it's not cool.
Joy Clark: It's not cool? So we should all do everything in the same language.
Gernot Starke: I remember a case where we had a client with a very, very crucial component written in Haskell, and none of our colleagues, myself included, had ever seen any productive Haskell code. So I said, "Oh, we need an expert, because I don't know if this Haskell code is well-written or not", and the usual analysis tools couldn't analyze that code... We had to bring in another colleague to have a look at this code and give us a well-founded opinion on what was really going on there.
Joy Clark: Okay. We talked about modularization in the Improvement stage. Are there other improvements that aim42 talks about?
Gernot Starke: There is one quite important improvement that worked quite well for a few of our clients. We are actually still searching for an appropriate name for it in aim42... We have a few patterns or practices in aim42 that belong in that area of improvement. This area is reduction; we try to make things smaller.
Gernot Starke: One problem the human brain has is that we cannot cope with large amounts of information. Many lines of code, many different classes, many dependencies overwhelm us, so we try to reduce. One reduction is to split large systems into several smaller systems - independent of this notion of microservices or self-contained systems. Just try to find areas in the system that don't belong there. There is a notion in software engineering called cohesion, so we try to find areas that are incohesive, that don't belong...
Joy Clark: Could you define cohesion?
Gernot Starke: Things belonging together. If I have a system doing some kind of business calculation and in the same codebase it's generating PDFs, that doesn't belong there; it's really something different. So move this PDF code to a different subsystem or to a different system. Or if you have a system that's mostly working with private customers, and a small area of this system is working with organizational customers - completely different types of customers with completely different data types; even the address of a private customer is usually just one, and for a corporate customer there are many.
Gernot Starke: Moving out this corporate customer stuff can simplify the private customer calculations a lot. We did that with a customer that had a quite huge system (several million lines of code), and we tried to move out several areas not belonging to the rest. Obviously, we are as always under NDA, so we cannot talk about those specific customers, but imagine a customer selling goods over the internet - typical e-commerce - and you're selling liquid goods, like olive oil, and in the same shop you are selling washing machines.
Joy Clark: Okay.
Gernot Starke: The customer did that. They sold everything. And we moved out the parts for the washing machines (as an example), because the logistics and the price calculations for washing machines are completely different from the logistics, transport and warehousing of olive oil. We split this huge monolith into several smaller parts, each caring for a specific area of this e-commerce business, making the smaller parts much simpler to maintain and reducing turnaround time - hotfix time - from weeks to a few hours... Literally, a few hours. That was a great success of this strategic approach of reduce, reduce, reduce. Making it smaller, making it easier to handle. So splitting out is a reduction approach. Self-contained systems or microservices are the ultimate reduction approach, but that's a different topic.
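The cohesion idea from the PDF example - business calculation and document rendering living behind separate boundaries - can be illustrated with a tiny hypothetical sketch; the function names and the stub PDF output are invented.

```python
# Before the split: billing logic and PDF generation were tangled
# in one module. After: the incohesive part lives behind its own
# boundary and can evolve (and be deployed) independently.

def calculate_invoice_total(line_items):
    """Pure business calculation - belongs in the billing system."""
    return sum(qty * price for qty, price in line_items)

def render_invoice_pdf(total):
    """Document rendering - a separate concern, moved out to its
    own subsystem; here just a stub standing in for a PDF library."""
    return f"%PDF-stub invoice, total={total:.2f} EUR"

items = [(2, 9.50), (1, 30.00)]
total = calculate_invoice_total(items)
print(render_invoice_pdf(total))
```

Once the boundary exists, the rendering side can be replaced or relocated without the billing calculation ever noticing.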
Joy Clark: Are there other approaches then? Are there other reduction approaches?
Gernot Starke: Yes, there is one which I personally dislike, called Big Bang. Many developers are crying, "We need to throw this system away, it's damaged beyond repair; we need a new one", and this "new one" approach - trying to recreate a complex system from scratch (called Big Bang) - is to my mind quite dangerous, because even finding out what the old system has to do, or what the old system is doing, is difficult. If the old system were simple to understand, it would be simple to maintain and to improve, so it's very optimistic to think that we can just rewrite an existing system and the outcome will have no bugs and be better in all aspects. It won't be better.
Gernot Starke: We'll probably put a link to some interesting anti-examples of Big Bangs in the show notes. Joel Spolsky has written about it - a very impressive take on Big Bang. I have sometimes experienced Big Bang approaches, and several of those had real people issues. The people building the new system were newly hired and lacked the business experience; the people in the existing team were quite envious, because the others were allowed to experiment with new technology. That's an organizational issue you have there. The existing developers were unhappy, so they started quitting their jobs, and the company lost know-how... You don't ever want that. You don't want to make your existing developers unhappy enough to quit, because they know about details nobody else knows.
Joy Clark: Yes. I'm a developer and I also have that itch when I see a system, like "I could do this better..." I just want to throw it all away and start over, and I think that's a common problem with developers. Is there a way you've used to try to motivate developers to want to work on an existing system instead of wanting to throw everything away?
Gernot Starke: It's a good idea to move bad things out of a system, but not all at once. Try to find the areas that really make your life worse; not every bug or every SonarQube issue has the same business value. Let us developers try to find the areas that really smell worse than others and have more business impact, and then you can throw out small parts and replace them with something better. If we plan this a bit, if we do this in a systematic way, you can get rid of your itches, I'm quite sure. But developers in general have to accept that there is a business, and business value is important for deciding where we shall refactor and where we shall leave the code in the state it is in.
Joy Clark: Communication probably plays a big role in that.
Gernot Starke: Yes. Eric Evans, the domain-driven design guy, once said, "We cannot fix it all." So even if you want to move towards domain-driven design in an application, you won't get everything tidied up. Not everything will be bounded contexts, domain aggregates and so on. So concentrate on the more important parts and get those tidied up, cleaned up.
Joy Clark: Have you found that with this splitting up, developer happiness also gets better?
Gernot Starke: Imagine your reaction time being weeks - you get some bug to fix because an end user had a problem; it takes you a few days to dive into that specific version of the source code because it's so complicated, and then it takes more days to get that bug fix out in a hotfix release. That's quite a frustrating situation.
Gernot Starke: Now imagine we reduce the complexity, the amount of things you have to care about, and you can get this bug fix out in, let's say, two days instead of ten days - a five-fold improvement.
Gernot Starke: I did a retrospective with a client where we did exactly that - we split out some parts, improved cohesion, and the developer happiness skyrocketed. They said for the first time in years it's really fun to work on the system, it's really fun to talk to the end users. They still have complaints, but now we can guarantee you get your fix within a few days instead of next month or the month after. That was a real difference.
Joy Clark: Well, I would be much happier in that situation, as well. Okay, so we've talked about reduction techniques - are there any other reduction techniques that are of note for improvement, or we can go on to other techniques?
Gernot Starke: One thing I really like to do before I start modifying source code is to have a look at the development process itself, because sometimes the issue with the code is just a symptom of the underlying problem: the development process itself is broken. If a business requirement takes half a year to get to a developer, that's a process problem, not a coding problem. My personal hobby over the years has become identifying development process issues. Why is an organization developing in a certain way, or why is it that the developers are agile, but the rest of the organization is not? We often have process friction between various departments in an organization, and I try to fix those, too.
Gernot Starke: This is completely independent of source code, and it's usually somewhat independent of the developers involved, but it involves other kinds of stakeholders. We have clients that are really agile in development, but their operations are stuck in the '60s: still doing waterfall, wanting written documents, wanting real signatures on paper in 2017... Which is unbelievable, but it still exists in practice. That's an area where we have the potential of improving by orders of magnitude, not just the few percent we get in other areas. So I like these process improvements a lot.
Gernot Starke: Another completely different area is technology improvement. Many clients use a technology simply because they have used it for many years: "We are using Java 6 because we've used Java 6 for ages." Showing these enterprises new technology options often requires organizational change: "This technology portfolio is written in our whatever document" or "We have to comply with this constraint." Resolving these constraints, getting rid of them, is an interesting approach to improvement, too.
Joy Clark: There's also a mention of cross-cutting concerns in the aim42 documentation...
Gernot Starke: Yes, I mentioned that as a planning issue. Imagine you have several problems in your code which you can identify, and you have these problems written on sticky notes. Then you find that all these problems belong to the same (let's call it) Java package. So the improvement would be "Get rid of this package" or "Replace this package with a new version." A single improvement - rewriting this package - would solve several of the existing problems.
Gernot Starke: On the other hand, you often have a single problem requiring various improvement options: we have to buy more memory for the server, we have to redefine the SLA with the operating center, we have to get another developer on board... So we have an interesting m:n relationship between problems and improvement options. M:n relationships are interesting because they are not simple.
Joy Clark: No.
Gernot Starke: That's the cross-cutting stuff in aim42 - apart from Analyze, Evaluate, Improve, you have to (let's call it) manage these m:n relationships between issues and problems on the one hand and the potential solutions on the other. As I said before, you have to integrate these improvements into your daily business, and that is a management or planning activity, which is cross-cutting.
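To make the m:n planning idea concrete, here is a minimal sketch of how one might track it - all issue names, improvement names, and numbers are hypothetical, and this is just one way to model the mapping, not anything prescribed by aim42:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Issue:
    name: str
    business_cost: int  # estimated cost of the issue, in some currency unit per month

@dataclass
class Improvement:
    name: str
    effort: int                                  # estimated effort in person-days
    resolves: set = field(default_factory=set)   # names of the issues it addresses

# Hypothetical issues and improvements. The relationship is m:n:
# one improvement can resolve several issues, and one issue may
# need several improvements.
issues = {
    "slow-search": Issue("slow-search", business_cost=8000),
    "flaky-import": Issue("flaky-import", business_cost=3000),
    "unreadable-billing": Issue("unreadable-billing", business_cost=5000),
}

improvements = [
    Improvement("rewrite-search-package", effort=20,
                resolves={"slow-search", "flaky-import"}),
    Improvement("buy-more-memory", effort=2, resolves={"slow-search"}),
    Improvement("refactor-billing", effort=15, resolves={"unreadable-billing"}),
]

def value_per_effort(imp):
    """Total business cost of the issues an improvement resolves, per person-day."""
    resolved_cost = sum(issues[name].business_cost for name in imp.resolves)
    return resolved_cost / imp.effort

# Rank improvements by "bang for the buck" when planning the next iteration.
ranked = sorted(improvements, key=value_per_effort, reverse=True)
for imp in ranked:
    print(imp.name, round(value_per_effort(imp), 1))
```

The point of the sketch is only that once the m:n mapping is written down explicitly, planning questions like "which improvement resolves the most issues per day of effort?" become straightforward to answer.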
Joy Clark: Okay. The logo - I'll try to describe it, because we're on a podcast... The logo is three arrows which point at each other in a cycle, which implies that it's a cycle...
Gernot Starke: An iteration, yes.
Joy Clark: It's an iteration. So after Analysis, Evaluation, Improvement, do you go back to Analysis, or are you usually done at that point?
Gernot Starke: When we as consultants come to a client, we are often done after the first pass, and then the client can continue on their own. After you've made some improvement, you should actually measure whether the improvement really solved the problem - and then you are in Analysis again: measuring performance, measuring the coupling between several components, analyzing some tests, or whatever. You should really make sure the problem has vanished, as you hoped it would. With some improvements, it might be that you've introduced another subtle problem in a completely different area of the system.
Gernot Starke: Now, the performance for one use case or one feature has gotten a lot better, but the load in a certain cluster or note in the database has gotten higher, making another use case a bit slow, or whatever. So analyzing the consequences of what happened in the last improvement should be an ongoing activity.
Gernot Starke: You go see your doctor once every year, have a check-up; I think doing a check-up in a huge system once in a while is a good idea. That's the basic idea behind this iteration in aim42. So you're not improving a system in one big improvement, but in a series of steps.
Joy Clark: When you're analyzing whether something actually improved the system, do you use the same measurements - the business value, whether you've saved money - or do you use other metrics to measure improvement?
Gernot Starke: I propose organizations do that and have some metrics about their systems. One improvement could be to introduce some automatic means of gathering data about your system. People often know how many sales there were or how much money the system turned over, but having more detailed metrics - for example, how many hours developers spend in a certain area - is interesting, too. If you earn a lot of money with one part of the system but developers spend most of their time in another part, that's a bad sign. Developers should spend their time where we earn money, not in areas that are completely uninteresting for the business, and making sure these numbers still match over time requires some analysis.
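A rough sketch of the kind of check being described - comparing where revenue is earned against where developer hours go. The module names and numbers are entirely made up; a real setup would pull them from accounting and from issue-tracker or version-control data:

```python
# Fraction of revenue attributed to each module (hypothetical figures).
revenue_share = {
    "checkout": 0.60,
    "reporting": 0.10,
    "legacy-import": 0.30,
}

# Fraction of developer time spent per module (hypothetical figures).
dev_hours_share = {
    "checkout": 0.15,
    "reporting": 0.60,
    "legacy-import": 0.25,
}

def mismatches(revenue, hours, threshold=0.2):
    """Modules where dev-time share deviates from revenue share by more than threshold.

    A positive value means developers spend disproportionately much time there;
    a negative value means the module earns more than the attention it gets.
    """
    return {
        module: round(hours[module] - revenue[module], 2)
        for module in revenue
        if abs(hours[module] - revenue[module]) > threshold
    }

print(mismatches(revenue_share, dev_hours_share))
```

With these sample numbers, "checkout" is flagged as under-attended and "reporting" as over-attended; rerunning the same check after each improvement iteration is one way to see whether the numbers still match over time.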
Joy Clark: A general question about aim42 - when is it a good time to use it? Do you only use it on legacy systems, or can you use it when developing new systems?
Gernot Starke: There are people defining a legacy system...
Joy Clark: That's always a difficult definition as well.
Gernot Starke: Yes. Let's say any significantly complicated existing system - and after a number of developers have worked together for half a year, every system is significantly complicated. It's a good idea to analyze "Do we have any problems?" or "What kind of problems do we have?" Applying improvements can make every system better, not only old systems. I suggest that more developers and architects learn about systematic improvement, because it can be done with low effort, accompanying the standard development activities we have. So use it whenever your system is in production and you have to maintain it. It's a very open approach, so you're not tied to any specific tools. Aim42 is a collection of currently about 90, close to 100 practices, and every single one of them can be applied on its own, having a certain value in the development or improvement of any system.
Joy Clark: Are there any resources off the top of your head that you can give me for learning how to do systematic improvements? We'll also link to the aim42 website in the show notes.
Gernot Starke: Actually, we started about two or three years ago to collect existing good practices. I don't like the term "best practice", although it is fairly common in the industry; we try to collect practices we've had good experiences with. We describe recipes for how you could apply these practices or patterns - organizational patterns, more or less - and several committers to aim42 have provided additional patterns. So it's more or less a collection, not an algorithm or a strict sequence of steps you have to perform.
Gernot Starke: We ordered these patterns and practices into the phases we discussed before, but it's a very loose collection of ideas. If you as a listener like, have a look at the table of contents, and I'm pretty sure you will recognize a few of them, because we reused a lot from the literature (all with quotes and sources). We didn't reinvent any wheels; we took good practices and applied them to improvement.
Joy Clark: So aim42 is open source - the last question I have for you is how someone can help and contribute if they would like to?
Gernot Starke: A few of our contributors were working in system maintenance, and they came back with ideas like "We did a certain kind of data migration, and we did it in the following steps", and they sent a pull request and we included their ideas in the method guide - it's more or less an open book we're writing. There are many personal experiences from contributors within this collection, so everybody is welcome.
Gernot Starke: We are very happy if people point out bugs or issues, point out that certain method approaches are not working in certain contexts. It might be that something that worked for me won't work for you, because you are working in Clojure and I work in Java.
Gernot Starke: It's really open. Many contributors are actually from innoQ, because I advertise it at innoQ a lot, but there are people from outside, and everybody's invited. It's hosted on GitHub, actually.
Joy Clark: We'll put a link in the show notes to that as well. Thank you so much for taking the time to answer all my questions.
Gernot Starke: Thank you for asking me great questions.
Joy Clark: I do my best. To all of our listeners, thank you for listening. Until next time.