Note about this chapter:
The content in orange was recently added and hasn't been integrated into the rest of the book yet.
By learning how to code properly, you have only prepared to begin the war for great software development. When you are in a lead position, the responsibility for making the work process optimal is on you. When you have a say in deciding the technical direction of a project, you should know how your choice will affect the performance of development and of the organization as a whole. You need to learn how a workflow ensures high-quality results and how to schedule tasks in a way that delivers predictably and sustainably, all while keeping things efficient, effective and well utilized. For that, you need to understand what makes any business process optimal, both in general and specifically in the case of software development. In this chapter we will cover:
- The main ideas from two process improvement systems, Lean and Six Sigma specifically adapted for the technical aspects of software development.
- The relationship between our everyday tasks and the theoretical ideas we discussed so far.
- What makes for an effective development process.
- How to measure the performance of the work.
First, I want to connect this topic to the Financial API. Optimizing the organization can affect 3 out of the 4 elements. It most obviously influences productivity and utilization, but as a secondary effect, it can impact the customer experience through the corollary improvements to product delivery and quality. Everything we will discuss here ultimately leads to these 3 effects and to their related business values.
I want to emphasize that your workplace doesn't have to be a major enterprise for you to benefit from this chapter. Many of these ideas apply to companies of any size, even to solo developers.
With that in mind, I want to continue with the 2 main goals of any process optimization. These will guide us in identifying the positive effects of a tech change from the organizational perspective.
One of the most important goals of organizational process optimization is to reduce variation across all variables of delivery. That way the processes can be kept under control. If this doesn't make much sense, that's normal! In simple terms, it means ensuring that strategy realization, product & service quality, speed of work and costs stay predictable, even when out-of-the-ordinary events happen. If that is achieved and the company has successfully identified the real customer needs, then the organization can consistently produce the desired results. That is the value of gaining control. To use a developer analogy, the goal is to make the delivery process deterministic. That's usually a beneficial property for us and, as it turns out, the same is true for the business. There is much more to Six Sigma and Lean than what I'm going to touch on here. My goal is not to make you an expert in organization and process optimization. I just want to show how you can optimize your work from the organizational perspective and how to identify the effects of any technology on it.
How can we apply this idea to our work? You need to identify what the variables in your software delivery process are. The next step is to find ways to make them perform predictably. Anything that supports that is an improvement, as long as it doesn't impede other values in a net-negative way. Following the SDLC overview of Chapter 2, it's easy to find the basic outline. Our job starts with the requirements and designs coming in. Here is a list of the most common factors at each following stage, but keep in mind this is not a comprehensive list. Depending on your situation, other factors might be at play, so make sure to thoroughly assess all aspects of delivery at your job.
In order to know if you are in control, it's necessary to set up and track metrics concerning all variables. That is nearly a separate profession in itself, so I'm not attempting to cover it fully. Instead, I will give some common examples and generic categories, so you will have a general sense of how to approach the tracking of each area.
Before examining the phases of the SDLC, I want to discuss the sources of variation they all have in common, because I found many overlapping concerns and it will be easier to handle them once. They are:
- Human factor: The skills, experience, motivation and persistence of the employees working on a given task are hugely influential in the achievable level of process consistency.
- Complexity: When the business domain is large, and the offered functionality is diverse, it's harder to handle all tasks with predictable quality.
- Collaboration: The number of third parties to collaborate with during the execution of the task. Because they are outside of the company's control, they lower the general level of influence over the consistency of processes.
- Communication: The amount of communication that needs to happen. The number and size of the involved teams is a major factor in this. The quality of the personal and team discussion apps, forums or wikis in use also affects its effectiveness.
- Dependencies: The number and quality of the connected systems and used tools developed and/or maintained by external parties also make it harder to achieve control over the delivery process of the whole system.
- Tech: The reliability of the hardware, software and cloud infrastructure used for getting the work done is also a factor in the stability of the processes.
- Tracking: Project and task management tools and their integration into the workflow, with the goal of tracking work progress. The better you track, the more insight you have, and the higher the level of control that becomes achievable.
- Automation: The degree of automation implemented at the different stages of work.
I will explain how they apply at the different lifecycle phases only when it's not immediately obvious.
The variables are the quality of plans, the time and the resources (mostly people) it takes to provide them. They depend on factors such as:
- Common concerns: The specific cases are: Collaboration, which here can include the people responsible for the development of the connected systems or parts of the system. Tech, which in this phase means the tools used for creating the plans, not the technologies used for building the software.
- System architecture: Clear separation of concerns, logical APIs, easy to understand structure, etc... Creating new development plans for such a system is considerably easier and more consistent compared to solutions not adhering to these ideas.
- Documentation: The system is well documented, all information is easily findable to help in understanding the less familiar parts.
Measuring the needed time and people is easy, but the quality of the plan is much more abstract by nature. A few indicators of it are: how many times rework was necessary to finalize it, the time and resources needed for implementing it, the quality of the resulting software and the degree to which the set budget and schedule were met.
The variables are the quality of the produced code and software, the delivery time and the quantity of the needed resources (mostly people). They depend on:
- Common concerns: The specific cases are: Collaboration, which here means the other teams in the company or external solution providers. Tech, which in this phase stands for the programming languages, frameworks and libraries used, besides the tools used for development.
- Development standards: linting and formatting, naming and structuring conventions, language and framework specific rules
- Quality control: Pair programming, code reviews, mentoring, TDD practices.
- Documentation: The system is well documented, all necessary knowledge is easily findable.
- Workflow: integration with version control, testing and deployment tools.
Some general metrics that can be used are code complexity, code coverage and code churn. When working with the SCRUM methodology, the standard measures are story points delivered, velocity and sprint burndown. Support-related metrics can apply as well, like first response and incident resolution time for fixing production issues. The number of fixed bugs per time period can also be tracked; it's a good idea to weigh them by severity and complexity. Process-focused measures that track the progression of tasks between the different stages of the development process can be useful too. Some examples are Cumulative Flow or Flow Efficiency. Measurement and tracking of non-functional attributes like performance or security also belong here. Other temporary measures can be put in place when necessary, like the percentage of finished refactoring/conversion/rewrite in a technology migration period, or metrics for tracking tech debt.
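To make two of these measures concrete, here is a minimal sketch of computing sprint velocity and flow efficiency from tracking data. The task records, field names and numbers are invented for illustration; they don't follow any real tracker's schema or API.

```python
from dataclasses import dataclass

# Hypothetical task records; the fields are illustrative, not a real tracker's schema.
@dataclass
class Task:
    story_points: int
    done: bool
    active_days: float   # time spent actively working on the task
    total_days: float    # time from start to completion, including waiting

def velocity(tasks):
    """Story points completed in the sprint."""
    return sum(t.story_points for t in tasks if t.done)

def flow_efficiency(tasks):
    """Percentage of elapsed time that was actual work (higher is better)."""
    done = [t for t in tasks if t.done]
    active = sum(t.active_days for t in done)
    total = sum(t.total_days for t in done)
    return round(100 * active / total, 1)

sprint = [
    Task(5, True, active_days=2, total_days=5),
    Task(3, True, active_days=1, total_days=5),
    Task(8, False, active_days=1, total_days=3),  # unfinished: not counted
]
print(velocity(sprint))         # 8
print(flow_efficiency(sprint))  # 30.0
```

A flow efficiency of 30% in this toy example means 70% of the elapsed time was spent waiting, which is exactly the kind of signal that points at bottlenecks.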
The variables are again the quality, the necessary time and resources. By quality here I mean the ratio of caught vs. uncaught bugs in the tested software, both functional and non-functional ones. We only have a real role here in 4 cases. The first is dev testing during and after finishing development, which simply means checking the results of your work. The second is writing unit tests. The third is usually not our responsibility, but it's still quite common for us to develop automated end-to-end tests. The last is a partial or complete role in implementing the tracking and measurement of non-functional properties.
- Common concerns: The specific cases are: Collaboration, as the testers might also need to collaborate with other QA teams, both internal and external. Tracking, which in the QA case also includes the reporting and documenting of issues and following their status progress.
- Testing procedure: The set of processes and rules used by the testers to standardize their workflow and ensure its quality.
- Environments: Even though environments are part of the Tech factor, for QA they gain a new dimension: the number of environments where the testing should happen. Of course, their stability and availability still apply.
Measures: The number and severity of production issues, meaning uncaught bugs, and the same for issues caught before release. The speed of the test process, the number of test cases and derivative metrics like test case effectiveness and test case productivity. How many requirements are covered by tests, and the tracking of bugs through the different stages of handling them.
The variables are the usual: quality, time and resources. Their specific meaning in this context is the following. Quality corresponds to the ratio of successful vs. unsuccessful operational actions, meaning releases, updates, maintenance, decommissioning, etc. Another related factor is the incident count of the operated systems. If they go down because of an error on the operators' side, the uptime ratio can be seen as a quality indicator of this lifecycle step, but an incident can also be a security issue or performance degradation. Time measures the duration of the operations work, and the resources here also include people, but as the field is getting more and more automated, the used computing power, time or service fees can be even more relevant. The sources of variability are:
- Common concerns: The specific case is Collaboration, which here means the internal or external maintainers of system parts or of other connected systems.
- Integration: The development process can reliably supply working artifacts and their transition between the dev and ops sides is seamless. It includes the communication of special requirements and the coordination between the two sides if needed.
- Monitoring: Both the reliability of the tools and the number of monitored system parts and attributes affect the average issue detection and resolution times.
- Protocols: The incident resolution processes are documented, practiced and prepared in advance. If implemented, this can significantly reduce variability in the recovery process.
Metrics to track operations performance: The time it takes to finish a deployment, the frequency of deployments over a set time interval, the size of changes delivered per deployment (called change volume), the number of failed deployments, and the number of outages caused by a successful deployment, also known as the change failure rate. Then there are the support-type metrics: mean time to detect, in other words the time it takes to detect an issue after it started; mean time to recovery, which measures the average time of resolving an incident; mean time to failure, which is the average time until the next issue happens; availability/uptime; and the percentage of SLAs met vs. unmet. Sometimes whole-lifecycle metrics are accounted here, like lead time, which measures the time it takes to deliver a feature from conception to receiving user feedback.
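A minimal sketch of computing two of these operations metrics; the incident log and the deployment counts are hypothetical numbers, and real setups would pull them from monitoring or CI/CD tooling instead:

```python
from datetime import datetime

def change_failure_rate(failed_deployments, total_deployments):
    """Percentage of deployments that caused an outage or had to be rolled back."""
    return round(100 * failed_deployments / total_deployments, 1)

def mean_time_to_recovery(incidents):
    """Average time from detection to resolution, in hours."""
    durations = [
        (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 3600
        for start, end in incidents
    ]
    return round(sum(durations) / len(durations), 1)

# Made-up incident log: (detected, resolved) timestamp pairs.
incidents = [
    ("2024-05-01T10:00", "2024-05-01T12:00"),  # resolved in 2 hours
    ("2024-05-07T08:30", "2024-05-07T09:30"),  # resolved in 1 hour
]
print(change_failure_rate(2, 40))        # 5.0
print(mean_time_to_recovery(incidents))  # 1.5
```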
Because automated error detection and developer participation in issue fixing were already discussed at the previous points, the only aspect remaining here is direct customer support. From the perspective of the organizational effects of development this area doesn't concern us, so I won't go into the details of optimizing the support process.
Let's discuss the second prominent family of optimization strategies. The main goal of this approach is to set up the processes in a way that minimizes the generation of waste. Anything can be considered waste that negatively affects any of the 3 elements of organizational performance. The Lean method identified 8 waste types. I will sum up their original meaning and explain how they translate to our work. I also wrote a little bit about the possible causes of these issues from the COOP perspective. This is not technology related, but I wanted to include this information because if you are in a lead position, you might have to call out these problems. It's possible that you will be the only one who actually understands their true cause.
- Shipping unnecessary or misprioritized features that were not validated and are not valuable to the users
- Fixes delivered, that are not prioritized according to the needs or feedback of the users
- Any kind of development, maintenance or support for products and features that brings little or no benefit to the company.
- Rework needed because of changing requirements or changing tools & technologies.
- Implementing the same thing by different teams because of technology divergence or non-uniformity.
- Maintaining and supporting a lot of older product versions especially when upgrading to the latest versions is an internal, organizational question of the company.
More products are generated than demanded
This can happen as the result of miscommunicating the company goals, the failure to align the done work with those goals, or simply setting up the wrong goals. In software development it manifests as the issues listed above.
In the COOP terminology it's a COO, so a CO2 🙂 problem. The lack of a clear strategy (control), or the lack of overseeing its execution, can be the reason for working on the wrong things. The other possibility is poor organization of the tasks. This issue mainly impedes the effectiveness of the company, as we are solving the wrong problems, holding back the delivery of real value.
Tracking usage metrics, getting real customer feedback, reviewing financial performance and development efficiency enable quantifying these issues.
Summary: Unnecessary development work on the product
Connection to tech choices This is the only waste where, despite my best effort, I couldn't find a link to any property of the tech tools.
- A long time passing between making a change in the code and seeing its result. Examples are lengthy compilation, build or deployment times. When the applications themselves are slow or unnecessarily complex, that also contributes to this issue.
- Monolithic codebases, where making a single change affects many unrelated parts of the system. This can cause huge delays in rolling out updates, especially if the work of multiple teams has to be synced up for a single release.
- Low degree of automation in the development, testing and deployment processes. Lacking things like codemods for automatic tech migrations.
- Bottlenecks in development, like poor tooling, lack of capacity to amend pull requests or to help juniors.
- Unstable software or hardware used for either development, testing, deploying or running the applications. It's a waiting-type waste because the time spent troubleshooting their issues is unproductive.
- Debugging issues being harder than necessary, because of the low quality or lack of tooling, nonexistent logging and monitoring, lacking documentation, etc.
- An immature solution with a weak or nonexistent ecosystem and community.
- Difficult upgrade processes taking time away from delivering features, because of incompatibilities or breaking changes.
- Inconsistencies between the implementations of the same application concerns across different projects, making company-wide improvements hard to roll out and making it difficult to take over other teams' work.
- Reinventing the wheel, or building a custom solution to a common problem when an open source alternative would deliver the same value or buying a product would still have a good ROI.
Time spent unproductively waiting for the next step to begin.
The cause can be organizational, like inefficient communication between the dependent parties, unsynchronized parallel processes that are interdependent at times, redundancies in the production flow, the inefficient scheduling of tasks, lack of resources, overly complicated workflows or the wrong assignment of roles and responsibilities. The other sources of waiting can be the used technologies and the development process, as the list above shows.
In COOP terms it's a POO, so an OOP 🙂 matter, primarily affecting the production part in terms of utilization and efficiency. Many of these issues are the result of problems in the overseeing and organizing areas. If we can foresee them, it's our responsibility to either mitigate them or raise attention to them. Where we have direct control, in the production phase, the sources are technical. The quality of our choices is the driving force in creating or solving these bottlenecks. Most of these problems only belong to this waste category when there are alternative solutions that offer the same benefits without the problems mentioned above.
Measuring this area is quite straightforward: tracking the time it takes to pass each phase of delivery, and the total or lead time, can show where the bottlenecks are. Keeping track of the time during which delivery can't proceed because of software or hardware issues is tricky, but doable, and it can help raise attention to the sources of inefficiency.
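The phase-by-phase tracking described above can be sketched like this. The status-change log, the phase names and the dates are all made up for illustration:

```python
from datetime import date

# Hypothetical status-change log for a single ticket: (phase entered, date).
transitions = [
    ("todo",        date(2024, 3, 1)),
    ("in_progress", date(2024, 3, 3)),
    ("review",      date(2024, 3, 4)),
    ("testing",     date(2024, 3, 8)),
    ("done",        date(2024, 3, 9)),
]

def phase_durations(transitions):
    """Days spent in each phase, derived from consecutive status changes."""
    return {
        phase: (end_date - start_date).days
        for (phase, start_date), (_, end_date) in zip(transitions, transitions[1:])
    }

durations = phase_durations(transitions)
lead_time = sum(durations.values())             # 8 days from "todo" to "done"
bottleneck = max(durations, key=durations.get)
print(durations)   # {'todo': 2, 'in_progress': 1, 'review': 4, 'testing': 1}
print(bottleneck)  # 'review' - half of the lead time was spent waiting here
```

Aggregated over many tickets, exactly this kind of breakdown is what reveals whether the delay sits in review capacity, testing environments or somewhere else.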
Summary: Unnecessary waiting during work
Connection to tech choices
- Will the usage of the tech fit into the existing workflow without blocking it? If not, can you reorganize the processes to eliminate the waiting?
- Will other teams become dependent on your work? If yes, how does that affect the overall process?
- Will other systems become dependent on yours? If yes, how does that affect the overall process?
- Is the tool stable? Will it increase the number of times the system becomes worse or unusable in any way?
- Is it easy to find help in troubleshooting the issues of its usage? That can mean documentation, online communities, the knowledge of coworkers or available developer tools.
- Does it affect the testing of the product? If yes, is it easy to automate it?
- Will you be able to quickly get approval to use it?
- Is there enough workforce capacity to effectively integrate it?
- How's the learning curve? Does it take long in the initial phase to get productive? If yes, is it offset by other benefits?
- Will it influence the speed of seeing the effects of code changes?
- Does its usage increase the coupling of the classes / modules / services / codebases?
- Compilation: Some tools require their own transformation steps to integrate with the rest of the software in use. That can mean compilation, transpilation, minification, conversion, packaging, etc. The opposite is true as well: many times there are tools that solve the same problems without requiring these steps, which means moving code through them is actually wasteful.
- Data handling: When doing unnecessary movement of data, we are generating a transport-type waste. Such cases can include: a too-heavy backup strategy, migrating between different technologies in vain, or an overly complex data processing pipeline. These can be the results of working with legacy systems, or of changing requirements, but the final result is a system that could be doing the same job using fewer resources.
- Tech migration: When modernizing a solution, or adjusting it to a new situation, we often enter a period of transitioning between the usage of the old and new technologies. Depending on their nature, it might be a complex, blocking process that in the end requires a total rewrite of the software, or instead we can use incremental adoption, going in small steps and continuing feature delivery while working on the migration. In a sense, we thereby avoid putting the old codebase through an unnecessarily complex transportation process.
- Version control workflow: For smaller projects a complex branching strategy might simply mean the movement of code through a process without real benefits.
- Deployment process: It should be aligned with the real needs of the project; sometimes the benefits of a complex automated pipeline simply won't pay off. When a simple solution suffices, don't go for overkill.
- Knowledge transfer: When there's a role or responsibility change, it often creates the need to pass on the knowledge of the person who is leaving the post or task to the next one. Somebody has to really mess up for this to go wrong, but if either the knowledge or the receiver is not what it's supposed to be, then the effort is wasted. This most likely only happens if the process starts before the final decisions are made about the change.
Unnecessary transportation of products or materials
The cause of this waste is usually a non-optimal choice, similarly to the waiting case: there are better alternatives that make the complexity of the used solutions wasteful in comparison. The correlations between the classic meaning and software development are a bit more far-fetched here than with the other waste types, but they are valid concerns. The cases I found to resemble transportation are listed above.
In the classic case, the existence of transport issues would show that there are problems in the "organize" aspect. However, for software development I found it is purely related to the "produce" phase, which means it's totally in our hands, and by using the better alternatives we can eliminate them.
Measuring these factors is quite tricky. Similarly to waiting, the detailed tracking of the status flow and the completion times of the different tasks can reveal insights about the above issues. Monitoring the performance of the applications can show if there are problems with data handling.
Summary: Unnecessary steps in the development processes
Connection to tech choices
- Will it add new steps to or simplify the development workflow?
- Will it add new steps to or simplify our data processing?
- Will it add new steps to or simplify our deployment process?
- Will it require migrating data from our current storage solutions? If yes, how costly is that?
- Does it require the rewrite of existing code? If yes, how much of it? Can it be done incrementally?
- Premature optimization of any software property like: performance, availability, resource usage, security, UX or any other unjustified capability
- Overengineered solutions
- Code added just in case it might be useful in the future
- Creating a framework to build the product when the problem's complexity or the available solutions don't justify it
More work is done or higher quality is produced than required
Just as with overproducing, this can happen as the result of miscommunicating the company goals, or of the failure to align the done work with those goals. It translates to the issues listed above.
Similarly to overproducing, in COOP terms it's an OO problem. At least the strategy is clear this time. The issues are with organizing the work or with overseeing its execution. It mainly impedes effectiveness, as we are solving the wrong problems, holding back the delivery of real value.
I don't know of any good measure to track some of these issues explicitly. The premature optimization concerns can be verified by user research that explores whether there's a real need for those properties or not. For the rest, the closest we can get to useful measurement is again time and task progress tracking. It requires a lot of experience in delivering projects to realize just from those numbers that something is off in this regard. Nonetheless, when investigating the reason for a slow development process, these issues might show up.
Summary: Unnecessary improvement of the code
Connection to tech choices
- Are the main strengths of the tool really useful for the project? If not, then using it will result in overdelivering in those areas. You don't need the world's fastest solution for every problem.
- Will it inevitably add or include functionality that we don't need?
- Does the tool encourage creating simple solutions or will it lead to bloated code?
- Already developed code that nobody uses, because there never was a real use case for it. It can be a consequence of overdelivering. It's a very minor waste; it only becomes relevant when a developer has to spend effort in vain to understand why that code exists and whether it's useful for anything, just to find it's not. This gets worse if there are lots of such cases, especially if the codebase has low findability or readability.
- Unused capabilities of the hardware, software, framework, library or any tool, especially if we paid for acquiring them. It becomes a real waste when the job could be done more efficiently using them but they are not utilized.
- Employment of developers whose skills and knowledge are no longer relevant to the company.
Unused products or materials taking up storage space and cost
It's classically a logistics problem, but in the case of development its cause is poor planning and/or the lack of oversight from the side of development or business.
In terms of COOP, it's a P concern. The wasted developer capacity sets back the efficiency of work, and the unused capabilities impede utilization.
Many technology-specific tools can help with identifying and removing unused code, so they can be used to track that issue. I don't know of any good measure of unused capabilities. Tracking employee utilization can show who is doing useful work and who isn't.
Summary: Wrongly or non utilized resources
Connection to tech choices There's no real factor to consider from this perspective. To prevent the generation of this waste, the mitigation should happen when evaluating the effects of the tool on waiting, transport and overdelivering.
- The ineffective reorganization of teams or management.
- Unnecessary training of employees.
- Forming internal communities or starting initiatives that are not aligned with the company strategy.
- Ineffective employee activities: anything that is mainly about person-to-person interaction, like meetings, decision processes or supporting colleagues.
- Switching people between tasks before completion.
Unnecessary movement of people
This is the same for software development as for any other kind of intellectual work. All the issues here create expenses without a return.
It's mostly an O issue, where O stands for the "organize" aspect. A less likely source can be the creation of a wrong strategy by the leadership, or the failure to align with the real goals on the side of management. So the full range of possibilities is CO2.
Measuring and tracking purely organizational efficiency is a topic I consider too far removed to cover here.
Summary: Unnecessary spending on employees
Connection to tech choices It's an area without a direct link to technology, but this waste can appear as a consequence of a misaligned tech decision. When a non-ideal solution is adopted, it might lead to the need for employee training, or the forming of communities around it, which further adds to the costs of the choice. In more complicated situations, the adoption of a technology by a given team might result in reorganizing the setup of the workforce. For example, a member of another team with much more experience using the tech might be planned to move into the adopting team. In these cases it's good to consider the personal and team effects of such changes and explore whether the adoption could happen at other teams with more human-factor or organizational benefits.
- Tech: Not all programming languages, frameworks and libraries are equal in the quality of software we can produce with them, or in the effort that's needed to produce the same quality. The likelihood of making mistakes has to be analyzed for each tool and evaluated against all choices. A very good indicator of how good a tool is in this regard is maturity: it means it's well known how to make the best use of the tech and how to avoid its pitfalls. The quality and amount of available documentation, training and online help should be high, just as the quality of its codebase and the activity level of its community. Of course, there are many specific factors depending on what kind of technology we are talking about. Always try to investigate which aspects of the tool can increase the number of programmer errors.
- Best practices: Most technologies have well established best practices to prevent creating bugs. Knowing and applying these are a major factor in the number of defects we generate.
- Standards: If the project adheres to a common standard, that can prevent issues caused by inconsistent naming or folder structures, bad readability, bad findability or hard-to-follow version history. It also matters whether the project uses tools to enforce the standards, and how effective they are at it.
- Quality control: The existence or lack of pair programming, code reviews, mentoring, TDD practices.
- Code complexity: There's only so much a programmer can keep in mind when working with code. The more parts a system has and the more connections exist between them, the harder it is to keep track of everything, and thus the probability of making mistakes increases with their number. Any practice or tool that helps keep complexity low is beneficial.
- Human factor: The programming and business domain experience of the developers is very impactful on the bug count. Egotistic people are also more likely to overestimate their capabilities and then make mistakes or compromise quality.
- Maintainability: There are many components of maintainability. Some are specific to the tools in question, but others are generic, like the open-closed and single responsibility principles, low coupling, low tech debt, and a good amount of documentation and comments. The points discussed at the Standards section are also relevant here. When a codebase is maintainable, it helps to onboard new colleagues or to hand over the work to another team with minimal extra risk of them creating new issues. In the case of third-party code, it helps speed up adoption and increases efficiency while working with it.
Creation of faulty or low quality products or services
This is the most familiar type of waste to developers, but it's not always our fault if the results are defective or low quality. A solution can end up being wrong despite a perfect implementation if the requirements or designs were poorly made. When the requirements change frequently, it's hard to create a decent solution. The lack of proper testing is not to blame for the existence of issues, but it contributes to their delivery. Even when the software is well tested and bug-free (never), if the deployment process creates outages or the environment where it runs is unstable, it will still make a bad impression of the product as a whole. The time it takes to release a new version can also influence the users' impression of the quality. That speed can be impeded by bad organizational structures and processes, which are sometimes the sole responsibility of management. If the company doesn't listen to feedback from its customers, that will also damage the perception of the product. Lastly, unrealistic budgets and schedules can lead to low quality software even when the best development tools and practices are used. It's our responsibility to call attention to these issues when they are not under our control, because many times the decision makers aren't aware of the consequences. I wish you the best outcomes in such cases; in my experience these are often the hardest problems to remedy. With all that out of the way, let's see how we influence this area.
Our part is strictly the P from COOP, but the other aspects outside of our direct influence are also relevant, so all in all it's an OOP matter, affecting mostly efficiency and, to a smaller degree, effectiveness through the quality control measures.
To quantify the best practices and standards issues, we can use technology specific tools; for most mainstream languages and frameworks, at least a few such tools exist to track those statistics. To track code review and TDD practices, we can use CI/CD tools. Tracking code statistics like the size and number of modules and services, code complexity, and changes in LoC can indicate the total complexity of a system. Many other aspects are still best judged by humans at this time, like the experience of developers or the maintainability of code.
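As a minimal illustration of tracking such code statistics, here is a sketch that assumes Python sources and uses only the standard library's `ast` module. It computes non-blank lines of code, function count, and a crude cyclomatic-complexity estimate; the choice of metrics and the branching-node list are illustrative assumptions, not a standard.

```python
import ast

# A rough sketch of automated code statistics (assuming Python sources):
# non-blank lines of code, function count, and a crude cyclomatic
# complexity estimate that counts branching constructs.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)

def code_stats(source: str) -> dict:
    tree = ast.parse(source)
    nodes = list(ast.walk(tree))
    functions = [n for n in nodes
                 if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))]
    branches = sum(isinstance(n, BRANCH_NODES) for n in nodes)
    return {
        "loc": sum(1 for line in source.splitlines() if line.strip()),
        "functions": len(functions),
        "complexity": 1 + branches,  # 1 + number of decision points
    }

sample = """
def grade(score):
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    return "C"
"""
print(code_stats(sample))  # {'loc': 6, 'functions': 1, 'complexity': 3}
```

Tracking numbers like these over time, per module or per service, is what lets you observe the trend in total system complexity rather than guess at it.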
Summary: Neglect of developer wisdom and best practices
Connection to tech choices
The part about our influence actually describes the connections; make sure to evaluate the tools against all of them.
- What are the factors at play that influence the likelihood of programmer error?
- Are there well known best practices to follow?
- If yes, are they easy to integrate into our workflow? Can we automate them?
- Are there well known standards to use?
- If yes, are they easy to integrate into our workflow? Can we automate them?
- Will it influence the effectiveness of the quality control we do?
- If yes, does it require extra caution, or can it remove some checks we do?
- Will it increase, simplify, or have no effect on the complexity of the code?
- How experienced are the developers with the tool?
- How will it affect the maintainability of our codebase?
- How maintainable is the tool's own code?
Not taking full advantage of the available skills and knowledge of the employees.
This is quite straightforward but not really software centric. The general part is to ensure the growth of talent at the workplace and to assign people to matching positions. Mentoring and training are key for the first; the second is an HR matter, but as experts we should be able to assess people's knowledge and how well they match a certain task. The only technically related matter I identified is the following.
From COOP it's OP again, and for the first time it's really strongly connected to the division of labor aspect (P), as a more efficient and better utilized setup can be achieved through unification in some cases. The general concerns reflect how well organized the work is.
Measuring this again means tracking organizational effectiveness, which I'm not covering.
Summary: Unused employee potential
Connection to tech choices
- Can the tool unify previously separate roles and responsibilities?
- Does it require dividing existing roles and responsibilities?
- How do these changes affect the other process properties?
We can use the knowledge from the previous sections to create a plan for analyzing an existing development process. If you want to improve your workflow and do it professionally, the following steps can help you find the best course of action. Keep in mind, you don't have to be formal and thorough to reap some benefits, so I encourage you to try applying it even if you can't find the time or motivation for a deep analysis.
- Identify all distinct steps in the creation and delivery of software.
- Identify the control concerns and sources of variation at each step.
- Identify the types of generated waste and what creates them.
- Set up the baseline measurements about the process performance per control concern.
- Set up the baseline measurements about the amount of generated waste.
- Analyze how big the effect of each issue is on all 3 kinds of value.
- Prioritize them by their impact on the most important values.
- Eliminate as much randomness and waste as possible, and measure the impact.
- Refine and repeat.
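Steps 6 and 7 of the list above can be sketched as a simple weighted scoring pass. The issue names, impact scores, and weights below are made-up assumptions for illustration, not measured data or a method prescribed by this book.

```python
# Illustrative sketch of the prioritization steps: score each identified
# issue by its weighted impact on the 3 kinds of value, then rank them.
# All names, scores, and weights here are hypothetical.
issues = [
    {"name": "flaky CI pipeline",
     "impact": {"productivity": 8, "utilization": 3, "customer": 5}},
    {"name": "manual deployments",
     "impact": {"productivity": 6, "utilization": 7, "customer": 2}},
    {"name": "unclear requirements",
     "impact": {"productivity": 9, "utilization": 4, "customer": 8}},
]

# Weights express which values matter most to this hypothetical business.
weights = {"productivity": 0.5, "utilization": 0.2, "customer": 0.3}

def score(issue: dict) -> float:
    return sum(weights[key] * value for key, value in issue["impact"].items())

# Highest-impact issues come first; start eliminating waste there.
for issue in sorted(issues, key=score, reverse=True):
    print(f'{issue["name"]}: {score(issue):.1f}')
```

The value of writing it down like this, even informally, is that the weights force the business conversation mentioned below: which of the 3 values matters most has to be decided explicitly before the ranking means anything.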
Two things to notice. First, this can't be comprehensively done by developers alone. We need information from our business colleagues, so it's a collaborative endeavor. Second, it's not every developer's concern. I think any somewhat experienced programmer can do steps 1 - 3; steps 4 and 5 might be trickier, but at least the lead developers should be able to handle those too. Steps 6 and 7 are where collaboration is indispensable, but it's best to cooperate from the start. The business perspective can be very helpful even at steps 4 and 5 to identify the most impactful things to measure.
When I set out to research this topic, I didn't expect to find such a big commonality in the causes of inefficiencies. It's clear from examining the list of variables and wastes that we can solve a lot of issues by paying extra attention to the recurring forces. So what are these?
I extracted the Common Concerns of Control into its own section for a good reason. Improving any of those will affect every part of the delivery process, which means they have a much bigger impact area than the step specific aspects. When there are no serious bottlenecks at the phase variables, I suggest handling these first. The single most important force at play is the human factor. From the C3 list it affects 3 more items: collaboration, communication, and tracking. It also plays a major role in enabling the generation of many waste types. I found it really interesting how this reinforces an idea from the Influencing Business Value chapter about employee engagement: namely, that the workforce is one of the foundations of success for any company. It's clearly visible from here that the higher the quality of the people employed, the better most processes will go. The second very strong factor is complexity. In the C3 category, it's the source of the need for collaboration and dependencies. It's also a major contributor to the generation of overproducing, waiting, transport, and defect wastes, and is at least a partial reason for many other types. Whenever possible, reduce the complexity of the business domain, the organizational structure, and the workflows to improve overall efficiency.
I'm sure you noticed that none of these are essentially technological issues. We have to face it: in the organizational aspect of our life, we are not the protagonists. Technology is important here, and it can become a bottleneck, but it's far from the most impactful factor in optimizing an organization. However, if we neglect our influence over this area, we can ruin the business outcomes.
A summary of organization optimization: It affects 3 parts of the Financial API, primarily productivity and utilization, but through them also the customer experience. Everything you are going to learn here will ultimately change these. We examine some of the main ideas of two process improvement methodologies, Six Sigma and Lean, to learn how technology can influence the efficiency of processes. Six Sigma aims at making the life of the organization as deterministic as possible, thereby giving it greater control over the results of work. We take a look at all the steps of the software development workflow and analyze the sources of randomness and variation at each stage. This follows the standard SDLC model laid out in chapter 2, but I extracted 8 common concerns shared by all phases; they are: the human factor, complexity, collaboration, dependencies, communication, tech, tracking, and automation. Lean is concerned with reducing the waste generated by the processes. That can mean, for example, the unproductive usage of any resource like development time, budget, or skills, but it has a much wider perspective than that. The method identified 8 types of waste, and I explain how each of them can be translated to software development; they are: overproducing, overdelivering, waiting, transportation, motion, defects, inventory, and underutilization. For every point in both methods - where it's relevant - I give examples of how that area can be measured for evaluating the outcomes of improvement attempts. Equipped with all this knowledge, we finish by defining a 9 step process to improve the efficiency of the software development workflow, and by summarizing how 2 of the common concerns of control have the greatest impact on the overall efficiency of the organization, namely the human factor and complexity.