Agile Development is Too Slow for the World of AI
It’s time to yank our software down from deterministic decision trees and expand our impact. “Good enough” design and functionality based on Agile Development methodology isn’t good enough anymore. We have software capable of learning, adapting, and arriving at the best design and experience for each user. So why aren’t we letting it do so in our everyday lives?
Partly because Artificial Intelligence and Machine Learning (AI/ML) are viewed as hard and inaccessible. It’s time to move past the 1999-era Agile development mentality. The fact is that self-evolving software is faster and better at achieving design, content, or functional decision goals than the most crack developer team in a traditional Agile framework. Product teams should be thinking about strategic solutions while our machines are busy figuring out, learning, and implementing tactical design and UX decisions. We’re the generals; our software, the tacticians. If the generals are down on the battlefield trying to address every single interaction, we aren’t spending time thinking about winning the war.
What’s the cost of staying agile, rather than adaptive? Moving too slowly and missing the boat. Missing an edge case that could drive a next-level experience for a new user. Missing a micro-trend that could earn us more money. Missing out on forming a relationship with a new consumer whose online profile hasn’t caught up with her new six-figure job.
Often we’re not even aware of these misses because our technology is working, grinding away at offering preset programmatic solutions. Or we’re aware that we’re missing out on these edge cases, but we have more pressing issues. Yet because even a single slight advantage can cascade into millions of discrete moments that ultimately up-level an entire business or industry, we’re leaving incredible potential on the table. The good news is that we don’t have to.
What feels the most foreign to the rigor of technology? Ambiguity. It’s time not only to get comfortable with ambiguity but to enable it – while still setting boundaries – in our software that designs user experiences. Often AI/ML solutions are complicated, cumbersome, and costly. It’s why the majority of risk, compliance, and design teams don’t allow their software to create, test, and implement user experience decisions.
Which is too bad. Because a machine that learns will make a better design, content, or functional decision. So what do we need to do to make that leap of faith, climb down from our decision trees, and allow our software to evolve? We embrace the probabilistic over the deterministic, and we allow our machines to learn more so they can do more.
The technology industry has always been comfortable with evolution. Our processes have evolved from client-server to cloud, from waterfall to agile, from object-oriented to event-driven – in every case using processes that are planned, mapped out, specific: deterministic. Adapting to a new type of development model, a probabilistic one, will allow our software to learn and address all the possibilities inherent in our product and user experiences. In the meantime, we get to dream up ever more impactful use cases for our software to test.
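To make the contrast concrete, a deterministic rule always serves the one variant a team hard-coded, while a probabilistic learner keeps exploring and shifts traffic toward whatever is actually working. Below is a minimal sketch of that idea using an epsilon-greedy bandit; the variant names, reward signal, and traffic numbers are illustrative assumptions, not anything from a real product:

```python
import random

random.seed(0)  # fixed seed so the sketch is repeatable

# Hypothetical UX variants a team might otherwise hard-code as one choice.
VARIANTS = ["layout_a", "layout_b", "layout_c"]

class EpsilonGreedyBandit:
    """Probabilistic decision-maker: mostly exploits the best-known variant,
    but keeps exploring so it can adapt when user behavior shifts."""

    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {v: 0 for v in variants}     # times each variant was shown
        self.rewards = {v: 0.0 for v in variants}  # cumulative reward (e.g. clicks)

    def _avg(self, v):
        # Untried variants get priority, so every option is sampled at least once.
        return self.rewards[v] / self.counts[v] if self.counts[v] else float("inf")

    def choose(self):
        # The "boundaries" live here: the software only ever picks from
        # variants the team has approved.
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))  # explore
        return max(self.counts, key=self._avg)       # exploit

    def record(self, variant, reward):
        self.counts[variant] += 1
        self.rewards[variant] += reward

bandit = EpsilonGreedyBandit(VARIANTS)
for _ in range(1000):
    v = bandit.choose()
    # Stand-in for a real engagement signal; layout_b is secretly the winner.
    clicked = random.random() < (0.7 if v == "layout_b" else 0.3)
    bandit.record(v, 1.0 if clicked else 0.0)

best = max(bandit.counts, key=lambda v: bandit.rewards[v] / max(bandit.counts[v], 1))
print(best)
```

The point isn’t the specific algorithm; it’s that the decision about which experience to serve moves out of a planning meeting and into software that earns its answer from users, within limits we set.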
It’s time to climb down from our trees. There are worlds to conquer in screens all around us. It’s time for us, and our machines, to adapt.