I’ve often asked myself why I have such a deep passion for mechanical watches and fountain pens. Is it just a question of status?
The answer is no.
I like them because they are, in their own way, eternal. Unlike many of the objects we use every day, they don’t become obsolete. A watchmaker can repair a mechanical watch an infinite number of times. A pen can be restored, refilled, polished, and passed on. It is a piece of heritage that lives on, from one generation to the next. It is a true triumph of engineering.
If you take a look at the slogans used by watch manufacturers over the years, they echo the same sentiment: one of immortality. You may recall “As long as there are men.” Or the one about never actually owning a watch, but merely looking after it for the next generation. These ideas resonate deeply with me.
We, as humans, are not eternal. But we long for certain objects – those that bring us joy, meaning, or identity – to stay with us until the very end of our days.
For me, the fact that I can always repair a mechanical watch or a fountain pen provides a unique sense of reassurance. It reminds me that while everything else moves at the speed of digital, some things remain timeless. And that engineering can really make it last.
And maybe that’s why these objects bring not just functionality, but joy. They are living proof that time and craftsmanship can resist the logic of disposability. Passion is at the very core of these creations, which is also why a new version isn’t released every few months. These objects are built to last.
Isn’t it fascinating how our brains work? We attach emotions, reassurance, and even hope to objects that outlast us. Maybe it is our mortality craving to be outlived by something the world will remember us by. A true test of skill, and a commitment to innovation. After all, the best technologies don’t get replaced. They get repaired, refined, and reimagined.
I really like the image that supports this blog, showing the progression of Human-Machine Interaction using the visual analogy of human evolution. This isn’t meant to be an immodest boast. It cannot be. This image isn’t my achievement. That laurel belongs to Generative AI, and it took all of 30 seconds to create it. Today, most creative expressions require just a strong foundational thought and the right prompts. That is a far cry from three decades ago, when images were built pixel by painstaking pixel in MS Paint, or even from five years ago, when talented and trained graphic designers worked with specialist graphics-editing software to create them.
My point is that technology has gotten so smart that it takes a few human inputs, stated in natural language, for the machine to understand exactly what you want and deliver an accurate representation of it, whether in words, pictures, or even complex software algorithms. The interesting part is the inversely proportional relationship between the smartness of a machine and the amount of human effort required to get output from it. Modern aircraft run on autopilot, whereas flying an early aircraft like the Wright Flyer at Kitty Hawk took a human managing a bewilderingly intricate set of wires and levers. Today’s locomotive pilot presses buttons to control a train, whereas early steam locomotives required crews to break their backs continuously shoveling coal into the firebox in punishing heat.
I remember once hearing an interesting definition of a “machine” as something that is designed to reduce human effort. The relationship between a human and a machine is therefore one of input provision and resultant action, respectively. This is where the inversely proportional relationship between the two (as mentioned above) intrigues me. The evolution of the machine, in this particular context, is comparable to how every human being evolves. As babies, we require a lot of input to produce even the simplest output, whether in speech or in action. As we grow, our reactions to stimuli become increasingly sophisticated and faster, and it takes less input to prompt action from us. Machines have evolved in a very similar fashion over time.
In computing, we have come a long way from the early days of human input through simple devices such as punch cards and switches, or even the keyboard and mouse, to the modern, sophisticated methods of today, such as voice-to-text. The finger has replaced the keyboard and mouse on many modern machines, such as smartphones. The Graphical User Interface has become ultra-smart too, and the need for these traditional input systems has reduced dramatically in modern GUIs.
What the mouse did for GUI navigation after Douglas Engelbart invented it in the 1960s, enabling intuitive interactions such as hypertext linking, document editing, and contextual help, conversational systems like Alexa and Google Assistant are doing today for touch and voice interfaces.
Modern human-machine interaction is replete with accessibility, context awareness and personalization. It is this transformation in input systems which has paved the way for semantic recognition and advanced contextual computing. In more recent times, this is where AI has been leveraged to interpret what the user wants – beyond just simple commands. The move into the machine working on the intent of what the human wants is already here, with advanced natural language processing, and multimodal memory retrieval using text, voice, and visual cues. With the help of AI-driven contextual search and memory recall, we are moving towards a precision-first age of user engagement.
If we are already here, what’s next?
Think Black Mirror, but in a more positive way. The near future is all about brain-computer interfaces. Neuralink and similar efforts now represent the frontier of direct neural interaction – where thoughts can become machine commands, and unlock new forms of accessibility and augmentation. There are prototypes that feature high-density brain implants like the N1 sensor that control devices directly from neural signals. Think it, have it. And this is the future of seamless, intuitive and context-rich human and computer interaction.
From humans having to learn the language of the machine, to machines now learning the language of humans, we have made tremendous advancements in technology. And to think that all it takes to generate something is just a thought – no machine language, no codes.
And if the future is already here, what lies beyond?
When we mention “robotics” today, let’s not immediately picture those complex images you’d find on a stock-photo website. Instead, consider a device you’re likely to have at home: the genius that is the little robotic vacuum cleaner. That disc-shaped wizard glides across your floor on its own, doing its chores without a complaint in the world. It has evolved tremendously since its inception in 1996. From bumping into obstacles all along its path to learn its route, and struggling to climb over mildly uneven surfaces, robot vacuums have come a long way. They now understand maps, obstacles, and blockages, and work around them independently, without someone having to pick them up and put them back like an unsteady toddler that’s just learning to walk.
All this is mainly because robots nowadays are equipped with analytical models that need to plan for uncertainty. They have the ability to think of countless scenarios and work out how to overcome them. When it comes to preparing for conceivable scenarios, it’s impossible to be fully prepared. But empowering these robots with an understanding of what is important, and how to prioritize helps them learn and make decisions as they go.
And this is what designing for uncertainty is all about.
At ABB, we have Autonomous Mobile Robots (AMRs), which are designed to move and navigate independently in a given space using sensors and AI. These transport robots move loads autonomously in various industries, from automotive to logistics to consumer goods and other industrial processes. Earlier, they would follow a path they had been taught, and any reconfiguration of the path meant a reconfiguration of the robot as well. But we have learned over the years that change is inevitable, and in today’s tech world, uncertainty is built into the robot, giving it decision-making power.
What is uncertainty? It can be anything, from an internal or external source, that breaks the fixed pattern the robot has learned. From changes in its usual environment to unplanned events, uncertainty is anything that forces the robot to evaluate factors outside its usual set pattern of decision making. It learns, evaluates its solution, and refines its abilities through trial and error. Remember our BASIC computer commands of IF, THEN, ELSE? It’s almost the same, but accentuated by advances in technology like AI and Visual SLAM.
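To make that IF/THEN/ELSE comparison concrete, here is a deliberately simplified Python sketch. All the names and thresholds are illustrative inventions, not taken from any real robot: the first function is the old fixed rule, while the second weighs a probability estimate instead of a certainty.

```python
def fixed_rule(obstacle_ahead: bool) -> str:
    # The classic IF/THEN/ELSE pattern: one input, one hard-coded response.
    if obstacle_ahead:
        return "stop"
    return "continue"

def plan_under_uncertainty(p_obstacle: float) -> str:
    # A modern robot reasons over likelihoods instead of certainties:
    # it weighs how probable it is that the sensors saw a real obstacle
    # and picks the action with the best expected outcome.
    if p_obstacle > 0.8:
        return "reroute"          # almost certainly blocked: plan a new path
    if p_obstacle > 0.3:
        return "slow_and_rescan"  # ambiguous reading: gather more information
    return "continue"             # probably clear: keep going
```

The key difference is the middle branch: where the fixed rule can only stop or continue, the probabilistic planner has an explicit "I am not sure yet" action, which is exactly what designing for uncertainty means in practice.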
Visual SLAM is a navigation technology that combines AI and 3D vision using off-the-shelf cameras. It allows AMRs to make intelligent decisions based on their surroundings, providing higher accuracy and robustness even in challenging environments. It can help differentiate between fixed navigation references and moving objects and people that aren’t permanently a part of the map. This adds a whole new dimension of flexibility to tackle how uncertain situations can be worked with.
This is the era of resilience in tech, where robots aren’t just about precision but also adaptability. The more human they become, the greater their power to make decisions and find a way around situations they weren’t programmed for. And I’m not just talking about AMRs and robot vacuums, but everything from self-driving cars to educational, service, and medical robots. Planning for uncertainty is surely less straightforward, but it is an essential step to make the robots of the future more robust, efficient, and imaginative.
That Artificial Intelligence has been a major game changer is a self-evident truth. After all, it has inspired a whole new industrial revolution of its own. The fourth industrial revolution (or Industry 4.0) has been built on the back of cutting-edge technologies, a list in which AI finds a major mention. So, in terms of transformative impact, AI is no less than what the steam engine was for its time or what the rise of computing represented in the 1960s.
We are increasingly seeing diverse applications of AI and its widespread proliferation in almost every aspect of our lives, to the extent that we don’t even include smartphones without some sort of AI capability in our purchase consideration set now. Every single major technology corporation is offering embedded AI for various business and personal applications. AI is almost ubiquitous, both as a concept and in its real-world applications today.
However, to date, AI implementations in the enterprise have been largely limited to micro-impact. These implementations have mainly been in the form of chatbots or personal productivity tools. These are useful but, in my opinion, don’t do justice to the power and potential of this game-changing innovation. AI tools, still in the infancy of their application in organizational contexts, are yet to deliver material business impact at scale. Today, they assist individuals but don’t yet transform how work gets done.
All of this is at the cusp of changing. And driving this change will be the rise of AI agents: digital “colleagues” that can perceive, decide, and act autonomously. Unlike chatbots, agents are goal-driven, collaborative, and capable of executing tasks end-to-end, across systems and workflows. The introduction of AI agents is poised to significantly alter a paradigm that has been in play for over 30 years now: that we customize our enterprise applications to improve efficiency, i.e., do more with fewer people.
With AI agents, this concept of reducing manpower to improve efficiency flies out of the window. To enhance efficiency, we can now have more workers, potentially an unlimited number, except that these will be digital coworkers. As this change comes into play, we will need to redefine our understanding and typical methods to calculate “efficiency”.
Today’s formula for this calculation is: Efficiency = Revenues / Human FTEs.
As we introduce AI agents into the mix, the formula changes to: Efficiency = Revenues / (FTE + aFTE) where aFTE = AI Full-Time Equivalent.
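As a toy illustration of the revised formula, the calculation could look like the following Python sketch. The figures are invented purely for the example:

```python
def efficiency(revenues: float, human_fte: float, ai_fte: float = 0.0) -> float:
    """Revenue per full-time equivalent, counting human FTEs and AI aFTEs."""
    return revenues / (human_fte + ai_fte)

# Hypothetical business: 100M in revenue, 500 human FTEs.
baseline = efficiency(100_000_000, 500)          # 200,000 per FTE

# Same human workforce plus 300 digital coworkers, revenue up to 150M.
with_agents = efficiency(150_000_000, 500, 300)  # 187,500 per combined FTE
```

Note how the denominator grows even though human headcount stays flat: comparing efficiency before and after the transition is only meaningful once aFTEs are counted alongside people.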
This will not be an easy change. It will have its complexity because the transition will not be just about introducing these agents. It will also be about making enterprise applications agent-friendly. Enterprise applications across practices and functions will need to become reliable and autonomous, beyond just being configurable, as they currently are. And this will be a multifaceted process involving technology migration, change management and widespread user acceptance.
This isn’t going to be an overnight transformation, but it’s coming fast. And companies that learn to deploy and manage AI agents at scale will unlock a new era of productivity.
Setting a clear goal for any project is always a great way to start it. The trick, I have learnt through my many years of managing complex projects, is to keep the momentum high from that first spark, through the making of real progress, to the final achievement of the goal.
One big risk in every project is that gap between intention and execution. It all starts with excitement: a burst of momentum, maybe even a brand-new app or productivity system. But slowly, things shift. Priorities pile up. Energy dips. Focus begins to blur.
Over the years, I have found a simple method to help build the rhythm as I move from intent to action—not perfectly, but consistently. It’s simple, but it works. And more than that, it’s something I return to whenever the path forward feels cluttered. And this is the concept of SWOT analysis.
It begins, as most things do, with the goal itself. Whether it’s professional or personal, I try to define not just where I want to go, but what success looks like when I get there. The sharper that picture, the more naturally momentum follows. This is the stage where the SWOT analysis really comes in as a game changer. What strengths can I lean on? What weaknesses might hold me back? What opportunities could help me move faster? And what threats or risks do I need to account for? It’s a short exercise, but it brings clarity to where I stand and what I’m dealing with.
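For readers who like to keep such exercises in a structured form, those four questions can be captured in something as simple as this Python sketch. The field names and example entries are my own illustrations, not part of any formal SWOT tooling:

```python
from dataclasses import dataclass, field

@dataclass
class SWOT:
    """A goal plus the four SWOT lists that frame it."""
    goal: str
    strengths: list[str] = field(default_factory=list)
    weaknesses: list[str] = field(default_factory=list)
    opportunities: list[str] = field(default_factory=list)
    threats: list[str] = field(default_factory=list)

# A hypothetical example of a filled-in analysis:
plan = SWOT(
    goal="Launch the internal knowledge portal by Q3",
    strengths=["domain expertise", "executive sponsorship"],
    weaknesses=["limited front-end skills"],
    opportunities=["new GenAI tooling"],
    threats=["competing priorities"],
)
```

Keeping the analysis in one small structure like this makes it easy to revisit and revise as the project moves along.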
Of late, I have also started experimenting with generative AI to add more value to SWOT analysis. While the basic principles and foundational ethos remain unchanged, generative AI tools are contributing ideas that are often smart, structured, and surprisingly insightful once the context is provided. I am seeing responses from such tools uncover new angles and sharper sequences, and even identify blind spots I hadn’t considered. Often, that’s all it takes to move forward with more clarity.
After defining the context through SWOT comes the step of visualizing it in an impactful manner. I usually use a simple visual, a Gantt-style layout with rough timelines and key milestones. It helps me see the shape of progress, and it gives me a reference point I can return to each morning to ask: what’s the one thing I need to move today?
And this is where it counts: the follow-through. The daily return to the plan. Not in bursts, not in sprints—but in steady, consistent steps. The real magic often shows up not in the leaps, but in the discipline to keep going—especially on the days when energy is low or the goal feels distant. Those are the days the system earns its keep.
I believe that the simplicity of my method for rhythm building is what makes it powerful. It gives structure without rigidity, and it helps me move even when things around me are shifting. And with GenAI now being part of that rhythm, it has become even easier to start strong—and keep going. Not by outsourcing the hard parts, but by adding a layer of clarity and speed when it’s needed most.
There it is then – my simple formula to go the distance by ensuring that reflection, structure, and momentum all move in the same direction.
As we look at how AI is developing, it’s hard not to notice the parallels to past technologies, like the PC and the Internet. These were once revolutionary technologies that sparked concerns and excitement alike. But will AI follow the same path of adoption and societal integration? If history is any guide, we might already know the answer.
The Early Days of PCs
When personal computers first appeared, they weren’t for everyone. People worried that only the wealthy would benefit, leaving others behind. The fear was that this new tech would widen the gap between those who had access and those who didn’t. But, over time, PCs became more affordable, and access spread, especially in schools, closing much of that gap.
The Internet’s Journey
The story repeated itself when the Internet became mainstream. Initially, only a handful of developed countries had reliable access, raising concerns about global inequality. But with time, the Internet expanded globally, and initiatives like public Wi-Fi and cheaper devices helped connect more people across the globe.
AI’s Present Concerns
Now, with AI, we’re seeing similar fears: Will AI take jobs? Will it amplify bias and inequality? These questions echo the early concerns around PCs and the Internet. But just as we adapted to those technologies, there’s a growing sense that we’ll find ways to integrate AI responsibly too.
Self-Regulation of AI
But here’s the interesting part—just like with PCs and the Internet, we’re seeing the industry step up. Competition, industry standards, and societal pressures are pushing AI towards responsible use without the need for heavy regulation. It’s a bit like self-driving cars: innovation is moving faster than laws can keep up, but frameworks are already emerging to ensure it’s done right.
Ethical AI Development
We’re also seeing ethical frameworks and standards take shape around AI. Just like we developed rules for privacy and data protection online, there’s a growing movement to ensure AI is built and used responsibly. This will be key in making sure it benefits everyone, not just a select few.
Every new technology brings its own set of challenges, but if history is anything to go by, society adapts. AI will likely follow the same path as PCs and the Internet—becoming a tool that, with time and care, works for everyone. We’ve done it before, and we’ll do it again.
The decisions we make now can shape the future. Especially in robotics – whether it relates to energy use, materials, or design, every choice has real environmental consequences. Balancing innovation with sustainability isn’t always easy, but when they intersect, the opportunities are exciting. I believe they’re two sides of the same coin, pushing robotics forward in ways that benefit both industry and the planet.
Energy-Efficient Robotics Designs
In robotics, one of the most exciting aspects is how much we can reduce energy consumption in industrial processes. However, creating prototypes that meet these efficiency goals while satisfying all stakeholders is no small feat. Finding the perfect balance between sustainability and practicality is always a challenge. But it is one that yields long-term environmental benefits.
Robotics in Sustainable Manufacturing
Precision is key, and robots have it in spades. In manufacturing, robots are minimizing waste and improving resource utilization, aligning with the growing consumer demand for eco-friendly products. By using less material and enhancing efficiency, the modern robotics practice is reshaping production processes in a way that benefits both businesses and the planet.
Extending Product Lifecycles
A lot of what drives sustainability in robotics is thinking about longevity. I’m particularly fascinated by modular robotic systems that extend product lifecycles. It’s not just about efficiency in the short term; it’s about reducing e-waste and building products that are easier to repair, upgrade, or even recycle.
Renewable Energy-Powered Robots
Something else that’s exciting is seeing robots powered by renewable energy sources, like solar-powered agricultural machines. This shift not only reduces reliance on fossil fuels but also opens up a future where the tools we create are in harmony with the environment. The potential here is massive, and it’s something we’re just starting to scratch the surface of.
Challenges in Sustainable Robotics
Of course, there are challenges. Balancing sustainability with stakeholder demands and ensuring costs don’t skyrocket are constant considerations. However, with increasing consumer awareness and demand, addressing these issues head-on can lead to innovative solutions that benefit both the environment and the bottom line.
As robotics continues to evolve, its role in promoting sustainability is becoming clearer. From energy-efficient designs to renewable-powered machines, robotics is transforming industries and opening up new possibilities for a greener future. We’re just at the start of discovering what’s possible, but the future looks promising.
Binary computing has come a long way since its initial days, when the transistor was first used in computing. But it is now reaching a stage where its capacity will start to become insufficient.
The power of classical computing is based on a straightforward concept: the more transistors there are on a chip, the more powerful the computer. Therefore, to increase computing power, we need to squeeze down the size of the transistor. We have done this successfully over the years, in line with Moore’s Law. Case in point: the Intel 4004, the first commercial microprocessor, released in 1971, had about 2,300 transistors. The chip in the iPhone you carry in your pocket today has close to 19 billion!
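Moore’s observation, that transistor counts roughly double every two years, is easy to sanity-check with a few lines of Python. The starting figures below are the commonly cited ones for the Intel 4004; the projection is a rough back-of-the-envelope sketch, not a precise industry model:

```python
def projected_transistors(start_count: int, start_year: int, year: int,
                          doubling_years: float = 2.0) -> float:
    """Project a transistor count forward, assuming a doubling
    roughly every two years (Moore's Law)."""
    return start_count * 2 ** ((year - start_year) / doubling_years)

# From roughly 2,300 transistors on the Intel 4004 in 1971:
estimate = projected_transistors(2_300, 1971, 2023)
# ~1.5e11, the right order of magnitude for today's largest chips.
```

Half a century of steady doubling turns a few thousand transistors into hundreds of billions, which is exactly why the physical limits on shrinking transistors matter so much now.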
However, the limit to which we can shrink the transistor is now approaching its lower threshold, which implies that we are close to reaching the peak of enhancing computing power through reduction of transistor size.
This is where quantum computing will act as a powerful next step to continue the innovation in automation and data processing. To understand what quantum computing is, it is important to get a feel for its underlying concept: quantum superposition, a core phenomenon of quantum mechanics.
You may have heard of the famous thought experiment by Erwin Schrödinger, in which he put forth a paradox: that a cat may be considered both alive and dead at the same time. Imagine this cat is put in a box with a poison that can be activated under certain conditions. While the box is closed (and you cannot see what’s happening inside), there is a 50% probability that the cat is alive and 50% that it is dead.
This hypothetical phenomenon, commonly known as “Schrödinger’s Cat”, is one of the most fascinating parables for describing quantum superposition. Once the box is opened and we witness whether the cat is dead or alive, we get a binary (yes or no) answer, which corresponds to the collapse of the quantum superposition.
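A tiny NumPy sketch makes the collapse concrete. Before “opening the box”, the state holds both amplitudes at once; measurement then forces a single classical outcome (the mapping of |0⟩ and |1⟩ to “alive” and “dead” is just this parable’s labelling):

```python
import numpy as np

rng = np.random.default_rng(42)

# Equal superposition of |0> ("alive") and |1> ("dead"):
# each amplitude is 1/sqrt(2), so each outcome has probability 0.5.
state = np.array([1.0, 1.0]) / np.sqrt(2)
probabilities = np.abs(state) ** 2        # [0.5, 0.5]

# Measurement ("opening the box") samples one outcome, and the
# superposition collapses to a definite classical state.
outcome = rng.choice([0, 1], p=probabilities)
collapsed = np.zeros(2)
collapsed[outcome] = 1.0
```

Until the measurement, the state vector genuinely contains both possibilities; afterwards, only one amplitude survives, which is exactly the binary position the parable describes.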
I have had a deep fascination with quantum mechanics from my very early days. This is mainly thanks to the fact that I studied it at university and found it to be a powerful theory with many applications. The fascination has stayed with me over the years. Today, as quantum computing starts to make strides towards becoming a realizable concept, I still believe that it has the potential to make a remarkable impact on diverse aspects of the way the world lives and does business.
The basic principles of quantum computing
There is a set of terms that describes the various components of quantum computing. While these sound technical, I have done my best to explain what they mean. The first is Qubits (or quantum bits), which act as basic units of information. They use the principle of quantum superposition to exist in a linear combination of two states. Qubits can also become interdependent through “entanglement”, where what happens to one Qubit has an impact on another. Then come Quantum Gates, which form reversible circuits that perform basic operations.
Combined with Quantum Algorithms, which add structure and process to run an operation; Quantum Decoherence, the loss of quantum behaviour through interaction with the environment, which must be managed through error correction; and Quantum Supremacy, the point at which a quantum machine demonstrates a clear speed advantage over classical ones, quantum computing becomes a realizable and implementable concept.
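As a minimal illustration of those building blocks, here is how two standard gates act on qubit state vectors in plain NumPy. The Hadamard gate creates superposition, and combining it with a CNOT gate on two qubits produces entanglement; this is a textbook construction, sketched here in its simplest form:

```python
import numpy as np

# Hadamard gate: sends |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
zero = np.array([1.0, 0.0])      # the |0> state
superposed = H @ zero            # amplitudes [0.707..., 0.707...]

# Quantum gates are reversible: applying H twice restores the input.
assert np.allclose(H @ superposed, zero)

# CNOT flips the second qubit when the first is 1. Applying H to the
# first qubit of |00> and then CNOT yields the Bell state
# (|00> + |11>) / sqrt(2): the two qubits are now entangled.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
bell = CNOT @ np.kron(superposed, zero)
```

In the resulting Bell state, measuring one qubit instantly fixes the other, which is the “what happens to one Qubit has an impact on another” behaviour described above.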
Using the power of quantum computing
Given the exponentially increased speed of computing it enables, the concept can have a transformative impact in several use cases. Some examples I can think of include:
Cryptography and its applications in cyber security: quantum key distribution can make messages super-secure and almost impossible to hack and is a powerful alternative to classical cryptography
Optimization by enabling near-accurate predictive models – for use in industrial applications such as supply chain or social infrastructure ones such as traffic management by redistributing cars in dense road networks, assisting in intelligent routing, etc.
Molecular simulation and protein folding helping drive smarter and faster drug discovery
Use of evolved risk analysis and fraud detection to make financial models stronger and banking operations more secure
Impact on material science by driving the discovery and design of new materials, including high-temperature semiconductors
The world is already at a stage where Artificial Intelligence is starting to make great strides in being practicably applicable in real-world scenarios. By enabling enhanced capabilities for machine learning and driving complex data analysis, quantum computing is bound to have a major impact on its efficacy and application in the fourth industrial revolution and beyond.
Are we there yet?
Admittedly, quantum computing is still in its nascent stages. It cannot act as an alternative to classical computing at its current stage of development and is being used to solve specific problems on a small scale today. We have several challenges to address before this starts to become a reality.
The primary one is keeping qubits in superposition. This requires the particles to stay near absolute zero (−273.15 °C). Moreover, quantum superposition is a very unstable state, requiring complex error-correction processes. Quantum computing also creates security vulnerabilities of its own: it threatens RSA (asymmetric) encryption, driving the need for post-quantum cryptography. In its untested stages, it could therefore lead to potential breaches of sensitive data and threats to infrastructure security.
Beyond the conceptual threats is the socioeconomic and geopolitical impact, where development of a powerful tool such as quantum computing could drive a quantum arms race and lead to significant increase in espionage and surveillance.
What lies ahead…
But these risks and challenges cannot impede the march of quantum computing. And, much like with any other transformative concept, the world will find a safe way to benefit from its power. Every time a new wave of technology is about to begin, it brings along both fascination and suspicion. AI and IoT are the most recent examples of how such fascination and suspicion have been overcome, and how these technologies have become part of our everyday lives.
A powerful concept such as quantum computing will, when it materializes, disrupt everything that we are accustomed to in terms of ways of working and how we think.
The intersection of cybersecurity and artificial intelligence (AI) is increasingly becoming a critical focus for organizations worldwide. The advancements in AI have not only revolutionized the way we approach cybersecurity but have also presented both challenges and opportunities for global enterprises.
Cyber attacks are getting more sophisticated
As AI evolves, cyber attacks have evolved too. They have become significantly more sophisticated and harder to detect. AI algorithms are now being used to automate attacks, making them faster and more efficient than ever before. This poses a significant challenge for enterprises, pushing them to enhance their cybersecurity strategies to defend against these advanced threats.
AI in Cyber Defense
On the other hand, AI serves as a powerful tool in cybersecurity defense. AI systems can analyze immense volumes of data to identify patterns and anomalies that indicate a cyber threat. This is often done much faster than human analysts can. Thanks to this, AI has become an indispensable component of our modern cybersecurity solutions, helping proactively identify and mitigate potential security risks.
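A drastically simplified sketch of that idea is flagging statistical outliers in an activity log. The data and the threshold below are invented for illustration; real AI-driven security tools use far richer models over far more signals:

```python
import statistics

def flag_anomalies(values, threshold=2.5):
    """Return indices of points more than `threshold` standard deviations
    from the mean: a toy stand-in for the large-scale anomaly detection
    that AI-driven security systems perform."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # perfectly uniform data has no outliers
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# Hypothetical logins per hour, with one sudden burst:
logins = [12, 11, 13, 12, 10, 11, 240, 12]
suspicious = flag_anomalies(logins)   # flags index 6, the spike
```

The point of the toy example is the workflow, not the statistics: a machine scans every data point continuously and surfaces only the handful worth a human analyst’s attention.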
Data Privacy and Regulatory Compliance
With enterprises collecting and processing larger volumes of data every day, there is a growing need for compliance with strict data protection regulations such as GDPR. AI helps ensure compliance by automating data processing that aligns with these legal requirements. However, this also raises concerns regarding data privacy and the potential misuse of AI in ways that may infringe upon these regulations.
AI-Powered Insider Threat Detection
The detection of insider threats within organizations has emerged as a significant concern. This is where AI can play a crucial role by identifying unusual behaviors or anomalies within an organization that may indicate a threat. However, while we consider the advantages of AI, we also need to look at the other side of the story. This, specifically, raises ethical considerations surrounding employee privacy and the responsible use of AI in monitoring staff activities.
The Need for AI Security Experts
As AI becomes increasingly integrated into cybersecurity, there is a growing demand for professionals with a deep understanding of both fields. This has led to a heightened need for training and education in AI-driven cybersecurity, creating a new niche within the cybersecurity and AI industries.
To sum it up, the convergence of AI and cybersecurity presents a complex landscape with multifaceted challenges and opportunities for global enterprises. It calls for a delicate balance between harnessing the potential of AI for enhanced security while ensuring the ethical and responsible use of AI technologies within the cybersecurity domain. As we navigate through this complex terrain, organizations must adapt their cybersecurity strategies to effectively address the evolving nature of cyber threats in the age of AI.
In the dynamic realm of manufacturing, AI and robotics have propelled us beyond traditional automation into a new era of intelligent machinery.
Picture this: AI (the brain) orchestrating intricate operations, collaborating seamlessly with robotics (the brawn) executing tasks with precision. We are witnessing a shift from traditional automation to a dynamic, intelligent synergy reshaping the manufacturing landscape.
AI-powered robots have transcended the mundane, engaging in data analysis, workflow optimization, and predictive maintenance. The outcome? Enhanced efficiency, reduced downtime, and an elevated standard of production quality.
Consider the automobile industry, where I’ve experienced AI’s revolutionary role in car manufacturing. From personalized production processes to delicately handling intricate components in electronics, AI-driven robotics have streamlined manufacturing. This doesn’t stop at the drawing board; it extends to simulations, which have become the cornerstone of testing. AI allows us to navigate diverse scenarios, saving crucial time by eliminating the need for extensive and repetitive physical testing. From shortening production timelines to reducing reliance on practices like physical crash tests with dummies, these simulations redefine efficiency in the manufacturing landscape.
The road ahead is exciting and challenging, and I believe workforce displacement and ethical considerations emphasize the need for retraining. From customer support executives to engineers and quality control analysts learning the intricacies of AI and how to manage its functions, retraining ensures a harmonious integration of technology and human expertise. From my perspective, it’s not about replacing humans; it’s about letting them focus on what truly requires a human touch.
The transformative impact of AI and Robotics in manufacturing is undeniable, reshaping the entire production landscape. But, I must address the elephant in the room—fear. Fear is a natural companion to innovation, especially when introducing something as groundbreaking as AI. However, let’s not fear the unknown but embrace the future of manufacturing—where AI isn’t just a tool; it’s the driving force behind a revolution.