Introduction

I used to be a total algorithm nerd, spending all my time preparing for computer science contests. I remember grueling hours spent mastering data structures, sorting algorithms (like Radix sort and other obscure, marginally useful ones), dynamic programming, and other intricate techniques. But when I transitioned into professional work as a web developer, my day-to-day revolved around building HTML forms and handling databases. All that algorithmic training seemed irrelevant, making me a very, very sad panda 🐼.

Yes, I was a web 2 dev

Then I jumped into blockchain development, and everything changed. Suddenly, those algorithms were essential for solving complex problems. The complexity of blockchain tech meant I had to dust off those old skills and get back into serious competitive-programming mode. It was a real eye-opener: no matter how far tech advances, the core principles of algorithms and thoughtful planning still matter.

Computational complexity translates directly into costs for users. A better algorithm reduces those costs and yields more scalable code. And since the code is open for everyone to see, people can tell which applications are better built, so optimization becomes really important when building dapps.

Costs are also important when you realize how much money goes to AWS

In recent years, people have become super hyped about giant AI models, thinking they can throw any problem at them and get a perfect answer. There's a growing tendency to rely heavily on LLMs for complex tasks, assuming they can manage everything independently. But this reliance overlooks the importance of algorithms and strategic planning in enhancing AI performance.

Strategic AI: Lessons from Poker and Go

Noam Brown, a researcher known for his work on AI game-playing strategies, provides a compelling case for the importance of planning. While developing AI systems capable of defeating top human players in poker, Brown noticed a critical opportunity for improvement. Early versions of his poker AI acted almost instantly, relying on pretrained models without real-time strategic thinking. By contrast, human players would take their time to contemplate moves, especially in challenging situations.

AIs were acting on instinct, while humans were using instinct, strategy, and planning to make their next move.

Realizing this gap, Brown introduced search and planning on top of his AI models. By allowing the AI to "think" for a few seconds before making a decision in the final round, he observed a staggering improvement. The AI's performance saw a 7x reduction in distance from the Nash equilibrium, effectively simulating the benefits of scaling the model by 100,000 times. This addition of planning transformed the AI's capability, leading it to outperform professional poker players in subsequent competitions.

More search increases convergence towards equilibrium https://arxiv.org/pdf/2304.13138

Similarly, in games like Go and Hanabi, incorporating planning algorithms like Monte Carlo Tree Search (MCTS) significantly enhanced AI performance. In Go, for instance, the raw neural network without MCTS underperformed compared to top human players. With MCTS, however, AI systems achieved superhuman performance without necessitating impractical increases in model size or training data.
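To make the idea concrete, here is a toy sketch of MCTS in Python on a simple take-away game: players alternately remove 1–3 stones, and whoever takes the last stone wins. The game, the UCB1 constant, and the iteration count are illustrative assumptions for a minimal demo, nothing like the scale of the Go systems:

```python
import math
import random

def moves(stones):
    """Legal moves: take 1 to 3 stones, but no more than remain."""
    return list(range(1, min(3, stones) + 1))

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones      # stones remaining after `move` was played
        self.parent = parent
        self.move = move          # the move that led to this node
        self.children = []
        self.visits = 0
        self.wins = 0.0           # wins for the player who played `move`

def ucb1(node, c=1.4):
    """Upper confidence bound: exploitation plus exploration bonus."""
    return node.wins / node.visits + c * math.sqrt(
        math.log(node.parent.visits) / node.visits)

def rollout(stones):
    """Random playout; returns 1 if the player who moved INTO this state wins."""
    s, turn = stones, 0           # turn 0 = player to move in this state
    while s > 0:
        s -= random.choice(moves(s))
        if s == 0:
            # player `turn` just took the last stone and wins
            return 1 if turn == 1 else 0
        turn ^= 1
    return 1                      # already terminal: the previous mover won

def mcts_best_move(stones, iterations=3000):
    root = Node(stones)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB1 while the node is fully expanded.
        while node.children and len(node.children) == len(moves(node.stones)):
            node = max(node.children, key=ucb1)
        # 2. Expansion: add one untried move, if any remain.
        untried = [m for m in moves(node.stones)
                   if m not in (c.move for c in node.children)]
        if untried:
            m = random.choice(untried)
            child = Node(node.stones - m, node, m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout from the new node.
        reward = rollout(node.stones)
        # 4. Backpropagation: flip the reward at each level (zero-sum game).
        while node is not None:
            node.visits += 1
            node.wins += reward
            reward = 1 - reward
            node = node.parent
    # The most-visited child is the recommended move.
    return max(root.children, key=lambda c: c.visits).move

print(mcts_best_move(5))  # taking 1 leaves the opponent a losing pile of 4
```

With a pile of 5, the game-theoretic best move is to take 1 and leave the opponent a multiple of 4; with a few thousand iterations the search converges on it without exhaustively enumerating the game tree.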

Embracing "killed backtracking"

I developed a little trick during my time competing in programming contests. Do you know how backtracking algorithms work? They methodically explore all possible solutions to find the best one. But here's the catch: going through every single possibility can take ages, especially when time isn't on your side.

In contests, there's always a time limit on how long your algorithm can run. Sometimes finding the optimal solution seems impossible, and even a naive backtracking algorithm can still help. But letting it run until it exhausts all possibilities won't fit within the time limit. So I came up with a twist on the traditional backtracking method, which I call "killed backtracking".

This works well with games such as chess, where you decide how long to keep it running

Instead of letting the backtracking algorithm run its full course (which could take practically forever on complex problems), I set a timer at the start of execution. The algorithm explores solutions just like normal, but it's constantly checking the clock. As it approaches the time limit, the algorithm gracefully exits, even if it hasn't traversed all possible solutions, and returns the best solution it found up to that point.
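A minimal sketch of the idea in Python; the subset-sum-style problem, the item values, and the half-second budget are illustrative assumptions, not actual contest code:

```python
import time

def killed_backtracking(items, target, time_limit=0.5):
    """Backtracking over subsets (maximize the sum without exceeding target),
    but exit gracefully when the time budget runs out, returning the best
    solution found so far."""
    deadline = time.monotonic() + time_limit
    best = {"sum": 0, "subset": []}

    def explore(i, current_sum, chosen):
        # Constantly check the clock; abandon this branch if out of time.
        if time.monotonic() >= deadline:
            return
        # Record the best valid sum seen so far.
        if current_sum <= target and current_sum > best["sum"]:
            best["sum"], best["subset"] = current_sum, chosen[:]
        if i == len(items) or current_sum >= target:
            return
        # Branch 1: take items[i].
        chosen.append(items[i])
        explore(i + 1, current_sum + items[i], chosen)
        chosen.pop()
        # Branch 2: skip items[i].
        explore(i + 1, current_sum, chosen)

    explore(0, 0, [])
    return best["sum"], best["subset"]

best_sum, subset = killed_backtracking([12, 7, 19, 3, 41, 8, 25, 5], 60)
print(best_sum, subset)
```

On this tiny input the search finishes well inside the budget, so it happens to return the true optimum; on a contest-sized input the deadline check is what kicks in, and you walk away with the best answer found so far instead of a timeout.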

Maybe the universe ran backtracking for 13 billion years

The beauty of this approach is that you still get a viable solution within the required time frame. It might not always be the optimal one, but in many cases it's good enough. Sometimes you get lucky and it is the best solution.