"When Mars rovers hit anomalies, they stop dead": ExoMars Autonomy Pioneer Dr. Mark Woods on Why Silicon Valley is Rediscovering 20-Year-Old Solutions to the $2 Billion Robot Problem

When Mars rovers hit anomalies, they stop dead. Dr. Mark Woods explains the $2 billion problem and why old AI outperforms new.

"When Mars rovers hit anomalies, they stop dead": ExoMars Autonomy Pioneer Dr. Mark Woods on Why Silicon Valley is Rediscovering 20-Year-Old Solutions to the $2 Billion Robot Problem

Here's what nobody tells you about Mars rovers that lack high-level automation or autonomy: when something goes wrong, they don't troubleshoot. They don't improvise. They stop. And nothing happens. Picture a billion-dollar machine, 140 million miles from Earth, frozen in place because it encountered an anomaly it wasn't programmed to handle. Every hour it sits idle costs thousands in lost science data. Every day extends mission timelines already stretched across decades.

Woods has spent two decades building intelligent machines for places humans can't easily reach. He taught neural networks to count degraded cash in Scottish ATMs during the AI winter of the 1990s, when mentioning "artificial intelligence" in most grant proposals was career suicide. He built the computer vision and symbolic AI systems that help ESA's ExoMars rover make decisions 20 light-minutes from the nearest human operator. Now, as Executive Director and Chief Strategy Officer at CFMS Ltd, he's building new tools to accelerate the application of AI in space robotics, and watching Silicon Valley rediscover architectural challenges that robotics engineers explored in the early 2000s, once again reaching for "neuro-symbolic AI" as a solution. Space agencies are commercializing rapidly. Nations are racing to establish lunar bases and mine asteroids. Autonomous systems will make increasingly impactful decisions on behalf of human operators. Woods helped build autonomy for the European Space Agency's first Mars rover mission, charged with finding evidence of life. Now he's working on lunar missions and on ways to scale AI and robotics so the next generation doesn't hit the obstacles he faced in previous decades.


From ATM Vision Systems to Mars: You mentioned following threads of interest rather than structured planning. But there's clearly a common thread. What connects teaching neural networks to count cash with building autonomy for Mars rovers?

“It’s about what’s possible,” Woods begins, speaking with the energy of someone who still enjoys connecting unrelated dots. “I’m interested in creating things that are radically new, radically better. Technology is the tool that lets me do that.”

The journey wasn't mapped out beforehand. "I did an undergrad degree, did a master's that led to a research assistant position. The RA led to an opportunity to do a PhD with AT&T, who had a research lab in Scotland at the time." What attracted him then seems almost quaint now. "There was this thing called neural networks that sounded interesting. It was off the back of an AI winter, and I thought: this is interesting technology. The idea that we can somehow, at a primitive level, model the brain."

The application area? Pure practicality. "Cash systems, ATM systems. Trying to solve: how do we make sure these things don't fail when you go looking for money on a Friday night. That used to be a more common problem, because of the condition of the notes."

Those early projects taught him architectural thinking: how to build systems that work reliably in the real world, not just in controlled lab conditions. "Following that interest got me a PhD in AI and mechatronics, which was the precursor to robotics. Those tools are really interesting and offer a lot of options in terms of application areas." The challenges they looked at then, Woods notes, are more relevant than ever as industry works toward embodied or physical AI.

Space was always the destination. "As soon as I got an opportunity to do something in space, I jumped on it. I got lucky. I was going to go to the States, but I had an opportunity to work for a company based in Bristol in the UK." The timing was perfect. "Just about three months after I joined, the UK was trying to put a lander on Mars. It got onto Mars, didn't fully go operational, but the European Space Agency decided they wanted to put a mobile robot on Mars."

That moment defined the next twenty years. "They wanted a self-driving car on Mars, and given my background in research, I thought I could do something there. So I started to get stuck in, and that became a long journey of doing first-of-a-kind autonomy for the mission, which has ended up on the mission."

The connecting thread, he reflects, goes beyond technology. "It's about pulling on the threads of things that spark your interest, that you're passionate about." For Woods, that passion has always been pushing the boundaries of what autonomous systems can achieve, whether that's recognizing degraded currency in Scottish ATMs or enabling future rovers to identify potential biosignatures on Mars.


With ATMs, you get real-time autonomy. But on Mars, there's that painful 3-to-22-minute communication delay. Have you seen cases where human operators made things worse by jumping in too soon, when the robot just needed time to work through a problem?

Woods flips the question immediately. "I'm going to flip your question on its head. The problem is: the robot doesn't do anything when it has trouble. It's not contextually aware of its surroundings. It knows enough to know that it's stuck, but what that means is it does very little."

He paints a stark picture. "If there's ever an anomaly on the mission, it stops and nothing happens. Imagine an oil rig where there was just no oil being extracted anymore. That's the reason for adding more autonomy to the mission. So the robot can at least self-repair at some level, and it's not dependent on that umbilical cord back to Earth."

Imagine if every time your dishwasher encountered a plate in an unusual position, it stopped completely and emailed you for instructions. And you lived on the other side of the world, so your response took 20 minutes to arrive. And the dishwasher's battery was draining the whole time it waited. That's a Mars rover's existence.

These aren't hypothetical scenarios. "Often these are simple things that might stop its progress. That dependency on the link back to Earth becomes a blocker because we're quite conservative with the mission, given the cost and the time it took to put it there. We don't want to make mistakes. So it tends to lead to quite conservative operations."

The result? Lost opportunity. "We use autonomy as an enabler to help us get more science back from the planet. That's why we do that."
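
To make the contrast concrete, here is a minimal sketch of the two operating philosophies. Everything in it is hypothetical illustration (the `Rover` stub and its `diagnose` and `attempt_recovery` methods are invented, not flight software):

```python
import time
from enum import Enum, auto

class Mode(Enum):
    NOMINAL = auto()
    SAFE_HOLD = auto()   # conservative default: freeze, wait for Earth

class Rover:
    """Minimal stand-in for a flight platform; all behaviour is invented."""
    def halt_all_motion(self): print("halting all motion")
    def downlink(self, msg): print(f"downlink: {msg}")
    def diagnose(self): return "wheel_slip"            # pretend fault
    def attempt_recovery(self, fault): return fault == "wheel_slip"

def conservative_anomaly_handler(rover: Rover) -> Mode:
    """Stop dead and wait: science return is zero until Earth replies."""
    rover.halt_all_motion()
    rover.downlink("ANOMALY: awaiting instructions")
    return Mode.SAFE_HOLD

def autonomous_anomaly_handler(rover: Rover, max_attempts: int = 3) -> Mode:
    """Self-diagnose, retry a bounded set of pre-validated actions,
    and fall back to safe hold only if recovery fails."""
    for attempt in range(1, max_attempts + 1):
        fault = rover.diagnose()
        if fault is None:
            return Mode.NOMINAL                        # false alarm
        if rover.attempt_recovery(fault):
            rover.downlink(f"recovered from {fault} (attempt {attempt})")
            return Mode.NOMINAL
        time.sleep(1.0)                                # settle before retrying
    return conservative_anomaly_handler(rover)         # give up safely

print(autonomous_anomaly_handler(Rover()))             # -> Mode.NOMINAL
```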

Woods offers a concrete example from the MER missions. "It's not just the fact there's a delay between the robot on Mars and Earth, which can range from five to ten minutes one way, often going via relay satellite, so it can take longer. The amount of data we can get back is really small."

He asks us to imagine the operational reality. "You're driving in a desert for several days, and you get these little thumbnails, little postcards back about what the robot has seen. It's not aware of how important those things are. In some instances, we've literally seen the robot drive right past what would be an important science target."

The operations team faces an impossible choice. "They have this challenge of: do we go back or do we stay on route? In that one case, they went back, and it turns out it was an important discovery. They spent quite a bit of time unlocking and exploring that."

The implications go far beyond one missed target. "It begs the question: have we missed other things because of these delays and this low-bandwidth interaction we have with the robot? The main conclusion is we can't operate robots on more distant planets or moons the way we operate satellites that are quite close to Earth, because the comms is almost real-time. It's not when we go further away."
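
The delay itself is simple physics: the one-way lag is just the Earth-Mars distance divided by the speed of light. A few lines of Python with approximate orbital extremes reproduce the familiar range:

```python
# One-way light time between Earth and Mars: the hard lower bound on any
# remote-control loop, before relay-satellite scheduling adds more.
C_KM_PER_S = 299_792.458              # speed of light in vacuum

distances_km = {
    "closest approach": 54.6e6,       # rare, favorable opposition
    "farthest (conjunction)": 401.0e6,
}

for label, dist in distances_km.items():
    one_way_min = dist / C_KM_PER_S / 60
    print(f"{label}: {one_way_min:.1f} min one way, "
          f"{2 * one_way_min:.1f} min round trip")

# closest approach: 3.0 min one way, 6.1 min round trip
# farthest (conjunction): 22.3 min one way, 44.6 min round trip
```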

[Infographic MR-001: When Mars Rovers Stop Dead. Science return during an anomaly hold: $0; robot awareness minimal, decision capability none. Conservative flow (6+ hours): rover hits anomaly → all operations cease → wait for Earth signal → mission control analyzes → battery drains → operations resume. Autonomous flow (minutes): rover hits anomaly → self-diagnoses → attempts self-repair → resumes if safe → science continues. One-way communication delay: ~3 minutes best case, 22+ minutes worst case; relay satellites add minutes to hours.]

Do you think the human-in-the-loop issue is top of mind today in the space industry? Or is there a gap between awareness and action?

"Historically, the space industry has been quite conservative. The human has always been in the loop for everything," Woods explains. "With science missions, that was probably okay. You could argue they weren't being productive enough in terms of science return, because we treated them quite conservatively. Hence the autonomy drive."

But the picture is shifting rapidly. "Space is changing dramatically. I'm not just talking about stuff close to home. I'm talking about exploration as well. It's like the internet was there all this time, and nobody noticed." He pauses for emphasis. "This is an important, strategic place for all sorts of reasons."

The commercialization of space creates new pressures. "There is greater demand for automation. Now you've got tension. We still want humans in the loop quite often, but we've got this tension to do things more quickly."

Woods sees the path forward clearly based on current advances: symbolic AI for high-level decisions that must be explainable and validated, neural networks for pattern recognition and perception, and architectures that prevent one component's failure from cascading through the system. Research is underway globally to understand how much of the decision stack can be implemented resiliently using LLM technology; the architecture and the balance between elements will likely evolve over time. The rovers on Mars today prove this approach works at a small scale.

But human involvement remains necessary for high-stakes decisions. "It's still valuable, especially when assets are being used for different reasons, for high-consequence decisions. For climate change and other things, we really need to understand what's happening to make sense of the information we're getting."

The challenge, Woods suggests, is finding the right level of autonomy for each context. Too much human control and you sacrifice productivity. Too little and you risk catastrophic failures or misunderstandings. The sweet spot depends on mission type, consequence of failure, and the specific capabilities of the autonomous system. A balance the industry is still learning to strike.


You've been building rovers for decades. Is there one failure mode that keeps coming back? The problem every new design team swears they've solved but somehow haven't?

Woods laughs, but there's a note of frustration beneath it. "There's two answers to this. The first is: it's not the quote-unquote smart stuff. It's not the esoteric 'we're going to do the funky new autonomy things and put an AI on board.' It's often the boring stuff."

He pushes the point. "The regular engineering diligence piece. We have procedures and processes, but every so often, something doesn't quite get caught. If you look back at the history of mission failures, a lot of it comes down to some simple check just not being executed, or a communication gap between the system integrator and the supplier. It comes down to something quite simple."

The second pattern is more subtle but equally problematic. "The mistake I see being made is we're still treating the de-risk of technology in a very conservative way. We're taking a long time to work out: will this technology work?"

Woods advocates for a different approach. "I think what we need to do is use smarter tools to speed up how we do that de-risk, and treat it more like a scale-up type approach. Where we're iterating quite fast, learning fast, and onboarding these technologies much more quickly."

The challenge is structural. "As an engineer, you're trying to do work with something that's uncertain with a constrained budget. The art, the 10,000 hours piece, is knowing: how do I take those limited resources and show value over time, so people can keep investing in that approach and making sure you get the technology where you need it to be as quickly as possible."

It's a delicate balance: moving fast enough to innovate while moving carefully enough to justify continued investment. Too slow and promising technologies never reach flight readiness. Too fast and you burn through budgets without proving viability. Woods has spent his career walking that tightrope.

[Infographic MR-004: What Actually Ends Missions. Failure analysis: roughly 95% engineering and process (simple checks not executed, communication gaps, documentation errors, mechanical failures, funding withdrawn) versus roughly 0% autonomy/AI (rare on missions, heavily validated, conservatively deployed, no recorded failures). Illustrative cases: an integrator-supplier communication gap that lost a mission; a missed item in a thousand-step pre-launch checklist that ended in a $125M crater on Mars; the metric-versus-imperial units mix-up between contractors; a wrong value baked into a specification everyone built to. De-risk phase metrics: 5-10 years to prove a technology, $100M+ to reach flight readiness, ~80% of technologies never fly.]

What ends more missions? Autonomy software issues or plain old mechanical failures?

Two answers again," Woods says, falling into the pattern of a professor who's taught this lesson many times. "The first: what ends most missions is either a failure because something regular hadn't been done properly, for whatever reason."

He continues: "What often also ends missions is they've achieved a fair bit, and the political will or the funding will isn't there to maintain the mission. So it'll stop in that fashion. Or maybe a mechanical failure due to operational wear and tear such as on the MER missions."

But then Woods highlights the invisible casualties. "I would also say some missions never get off the board because of the cost of de-risk. There's a whole family of missions that could have been flown, should have been flown arguably, and didn't, because the perceived cost, with some legitimacy, was too much."

Think of all the books never written because the author spent so long researching that they never actually wrote the first chapter. All the startups never launched because the founder spent five years perfecting a business plan instead of building a prototype. We've got the space exploration equivalent: missions that sit in PowerPoint purgatory forever because we can't figure out how to prove they'll work without spending the same amount it would cost to just fly them.

This is where his frustration becomes most evident. "I think that could be addressed with more agile, more flexible ways of onboarding the technology and de-risking." The missions that fail after launch get the headlines. The missions that never launch at all, stopped by technology maturation processes that are too slow, too expensive, too risk-averse, remain invisible.

The irony is painful: Conservative approaches designed to prevent failure end up preventing progress. Missions that could expand human knowledge sit on PowerPoint slides because the industry hasn't figured out how to prove new technologies quickly and affordably enough to justify the investment.

[Infographic MR-005: The Targets We Miss. What the rover sees: full-resolution imagery, complete sensor data, real-time observations, spatial and contextual detail. What Earth receives: small compressed thumbnails, limited data packets, delayed by hours or days, missing context. The impossible choice: go back (costing time, battery, and mission progress, with no guarantee it's worth it) or keep going (on schedule, but perhaps past the discovery of the decade). The open question: "Have we missed other things because of these delays and this low-bandwidth interaction we have with the robot?"]

You were doing neural nets back when they were counting cash, not generating images. Is there any AI "breakthrough" today that makes you think: we actually tried that years ago? What did you learn then that you wish people today would remember?

"It's a really great question," Woods says, clearly energized by the topic. "What I see is, it's really common for robotics. Because robots do two things." He sets up the dichotomy carefully. "They have to be responsive in the real world, in the same way as if you leave this interview and walk down the street, you've got to be aware, subconsciously, of a car coming. You've got to be mindful of your space. That's a reactive mode."

But humans do something else at the same time. "At the same time in your head, you're probably thinking ahead: what have I got going on? What other stuff have I got on over the next few days? You're planning. You're doing that high-level deliberative stuff."

The robotics world started exploring this decades ago. "There's a series of architectures that were created to allow for those two different forms of intelligence. One's more reactive and fast, the other is more slow-time thinking. Like Daniel Kahneman's 'Thinking, Fast and Slow.'"

Think of it like driving a car: your neural network is the reflexes that slam the brakes when a kid runs into the street. You don't think about it, you just react. Your symbolic AI is the GPS that planned the route, calculated fuel stops, and decided which highway to take. You need both. A car that only has reflexes crashes into the first unexpected detour. A car that only plans but can't react drives straight into the obstacle while still calculating the optimal avoidance trajectory.
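
A rough sketch of that two-layer pattern, with invented function names rather than any mission's actual code: a fast reactive check runs on every tick and can veto the slow deliberative plan.

```python
import time
from collections import deque
from typing import Optional

# System 1: reactive layer (fast, runs every tick)
def reactive_check(sensors: dict) -> Optional[str]:
    """Millisecond-scale reflexes: return an override action or None."""
    if sensors["obstacle_range_m"] < 0.5:
        return "EMERGENCY_STOP"
    if sensors["tilt_deg"] > 25:
        return "BACK_OFF"
    return None

# System 2: deliberative layer (slow, runs occasionally)
def deliberative_plan(goal: str) -> deque:
    """In a real system this would be a validated symbolic planner;
    here it just returns a fixed, explainable sequence of steps."""
    return deque(["DRIVE_TO_WAYPOINT", "IMAGE_TARGET", "UPLINK_SUMMARY"])

def control_loop(goal: str, ticks: int = 5) -> None:
    plan = deliberative_plan(goal)              # slow thinking, done once here
    for tick in range(ticks):
        sensors = {"obstacle_range_m": 0.3 if tick == 2 else 3.0,
                   "tilt_deg": 10}              # simulated readings
        override = reactive_check(sensors)      # reflexes run first, every tick
        if override:
            print(f"t={tick}: reactive override -> {override}")
            continue                            # reflex wins; plan resumes later
        if plan:
            print(f"t={tick}: executing {plan.popleft()}")
        time.sleep(0.01)

control_loop("characterize outcrop")
```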

Woods has worked with both paradigms. "I've worked with both types of AI. I think people forget there's symbolic AI and there's machine learning. My own background is in machine learning, but there is a symbolic type, and I've used that in space missions. Using it again today for a lunar mission. And by the way, a whole bunch of terrestrial applications."

Here's where the frustration emerges. "I think what people forget is you need that mix of deliberative and reactive stuff. A lot of the focus is more on reactive now, because we're getting this fast prediction from big-data, heavy models. Data-centre level compute allows us to do that. I've got access to compute I can put on a robot that I never had before, and some of the things we can achieve would have been unheard of a few years ago."

But something's missing. "In many applications you can benefit from a hybrid architecture. I'm seeing it coming back; you might hear the phrase 'neuro-symbolic.' Someone who's been in it for a while is going: that was stuff we were thinking about way back in the early 2000s, trying to work out how to architect that."

Woods isn't claiming credit. He's highlighting that we are seeing a recurrence of similar challenges but also an opportunity to deliver good solutions. "I think if people would look back, they could probably draw a lot of knowledge from the work that was done in a previous time. I'm not just saying that because I might have been involved. I generally think it could inform current work."

The pattern is clear: The compute has improved. The models have scaled. But the basic architectural challenges, combining fast reactive intelligence with slow deliberative reasoning, remain the same.

[Infographic MR-002: The Dual-Brain Architecture. System 1, reactive intelligence (neural network, fast thinking): millisecond responses, pattern recognition, obstacle avoidance, real-time adaptation; keeps the robot alive. System 2, deliberative intelligence (symbolic AI, slow thinking): strategic planning, mission objectives, explainable decisions, science target selection; keeps the mission on track. Neural networks alone: fast perception but black-box decisions operations teams won't trust (rejected for Mars). Symbolic AI alone: explainable and trusted but too slow for real-time (works but incomplete). Hybrid architecture: fast reactive responses plus explainable deliberative planning (flying on Mars).]

Mars rovers still use rule-based systems for their decisions, correct? So what happens when you trust a hallucinating LLM to control, say, a nuclear inspection robot? Which "outdated" symbolic AI approaches will outlive this current hype cycle?

"Your question is really on point," Woods says, leaning forward with intensity. "It cuts to the heart of this. It almost cuts to the heart of how we behave as humans, and how that model reflects the tools we're building and how we interact with them."

He starts with a correction that's also a celebration. "Colleagues of mine at JPL, I was delighted. The NASA JPL rover Perseverance found and confirmed a potential biosignature. That's the first step on saying: maybe this thing we found here could have been caused by organics. We don't know. The only way to know will be to get that back to Earth and do a whole bunch of other complex steps."

Then the technical detail: "Part of the software involved in getting the robot to do some of that stuff was symbolic AI. That high-level decision-making piece was involved in that."

But it's not either/or. "There are other bits of technology on some of the Mars rovers that use a more computer vision, machine learning type approach. Because again, they're more pattern recognition based. Back to that point I made: you need both."

Woods explains why symbolic AI won the trust battle for Mars. "When I started to look at trying to use different types of AI to put autonomy on the rover, it was clear to me early on that there's no way the operations team would ever trust something that they couldn't understand. How it had come to the decision it had come to." That's a black box.

The solution was transparency. "We went with symbolic AI because it was closer to how they did things anyway, and it was declarative. It could explain the decisions it made. It was dependent on models that could be validated. It was important to me that the operations team were on board, because there's no way you're going to get new tech in unless the ops team are behind it. And that it was never going to jeopardize the mission. We had as much transparency as possible."
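
What "declarative" buys you is easiest to see in miniature. The sketch below is a toy, not the ExoMars planner, but it shows the property Woods is describing: in a symbolic rule base, the rule that fires is itself the explanation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str                          # identifier the ops team can audit
    condition: Callable[[dict], bool]  # validatable against mission models
    action: str

# Declarative knowledge base: each rule can be inspected and validated
# on its own, which is what made symbolic AI trustable for operations.
RULES = [
    Rule("low_power_hold",
         lambda s: s["battery_pct"] < 20, "ENTER_SAFE_HOLD"),
    Rule("science_target_nearby",
         lambda s: s["target_score"] > 0.8 and s["battery_pct"] >= 40,
         "APPROACH_TARGET"),
    Rule("default_traverse", lambda s: True, "CONTINUE_TRAVERSE"),
]

def decide(state: dict) -> tuple:
    """Return (action, explanation); the fired rule is the explanation."""
    for rule in RULES:
        if rule.condition(state):
            return rule.action, f"rule '{rule.name}' fired on {state}"
    raise RuntimeError("rule base must cover every state")

action, why = decide({"battery_pct": 55, "target_score": 0.9})
print(action, "--", why)   # APPROACH_TARGET -- rule 'science_target_nearby' ...
```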

But the picture is changing. "As LLMs and discriminative AI have dialed up in capability, potentially we could do even more interesting things." He's recently published work showing this. "We just put out a paper a few months back where we did work trying to assess and find targets, initially in an area the size of London on planet Mars. With the latest deep learning models and our dedicated space data centre, the first of its kind, we scaled out to find a whole bunch of similar features right across the planet. That shows you the ground-breaking power of modern deep-learning-based AI technology."

Then comes the key distinction. "But to your point: an LLM-based solution, a generative AI-based solution, it can give apparent comprehension. Doesn't necessarily mean it's competent. That's the best way to think about it. It's apparently a comprehended thing and sounds plausible, but it may not be competent."

It's like hiring someone who interviewed brilliantly, sounded confident about every question, used all the right buzzwords. Then on day one, you discover they can't actually do the job. They just knew how to sound like they could. Except with a Mars rover, you can't fire the AI and hire someone else, given the cost to develop. The implications for high-stakes applications are obvious. "We have to have safeguards around that. If we can do something really incredible with ML, and that decision might be necessary with respect to overall operations, worth $1.5, $2 billion or whatever, you don't want to lose it. There's going to have to be that hybrid architecture."
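
A hybrid safeguard of the kind Woods describes can be pictured as a gate: the generative model may propose, but only a deterministic, auditable checker may approve. The sketch below is purely illustrative; `llm_propose` is a stub standing in for any LLM, and the limits are invented.

```python
WHITELIST = {"IMAGE_TARGET", "DRIVE_TO_WAYPOINT", "ENTER_SAFE_HOLD"}
HARD_LIMITS = {"max_drive_m": 20.0, "min_battery_pct": 25}

def llm_propose(observation: str) -> dict:
    """Stand-in for a generative model: fluent, confident, unverified."""
    return {"action": "DRIVE_TO_WAYPOINT", "drive_m": 85.0}  # plausible, unsafe

def symbolic_gate(proposal: dict, state: dict) -> dict:
    """Deterministic, auditable checks: apparent comprehension isn't enough."""
    if proposal["action"] not in WHITELIST:
        return {"action": "ENTER_SAFE_HOLD", "reason": "unknown action"}
    if proposal.get("drive_m", 0) > HARD_LIMITS["max_drive_m"]:
        return {"action": "ENTER_SAFE_HOLD",
                "reason": f"drive {proposal['drive_m']} m exceeds limit"}
    if state["battery_pct"] < HARD_LIMITS["min_battery_pct"]:
        return {"action": "ENTER_SAFE_HOLD", "reason": "battery too low"}
    return proposal   # competent as far as the validated model can tell

decision = symbolic_gate(llm_propose("bright outcrop ahead"),
                         {"battery_pct": 60})
print(decision)  # {'action': 'ENTER_SAFE_HOLD', 'reason': 'drive 85.0 m exceeds limit'}
```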

And there's another constraint most people forget. "AI is still power hungry, even on embedded processors. The kind of processors I could use in a terrestrial robot are not the same I could use on a Mars rover. They're way less powerful. So there's a de facto cap on the level of tech I can use on one of those vehicles anyway."

Woods's conclusion circles back to the architecture he has championed all along: symbolic AI for the high-level decisions that must be explainable and validated, neural networks for pattern recognition and perception, and structures that prevent one component's failure from cascading through the system. The rovers on Mars today prove that approach works, at least at a small scale.

[Infographic MR-003: Apparent Comprehension ≠ Actual Competence. Type A, apparent comprehension: sounds plausible, uses correct terminology, responds confidently, can't explain its reasoning, hallucinates details, has no error awareness. Type B, actual competence: provably correct, explainable, aware of uncertainty, traceable logic, validated models, auditable operations. The stakes: a $2.5B mission, 20 years of development, one chance. The trust test: neural networks ("How did it reach that decision?" "We don't know. Black box." "Then we can't use it.") are rejected; symbolic AI ("Here's the logic tree. Here's the validation.") is accepted.]

Looking 10-20 years ahead, what does human-robot interaction look like? Are we still supervising autonomous systems, or does something different emerge?

"My experience tells me the interaction is going to become way more ambient," Woods says, sketching a future that's already partially visible. "A lot more like this, right? I mean, this is kind of a Turing test in a way, what we're doing here."

He offers a personal example. "I was on a panel earlier today, and I gave an example of using chatbots to learn languages. I'm using one at the minute to learn Irish, which is a language that's not spoken by very many people in the world. But I can have a reasonably sensible conversation with a chatbot. That's interesting enough in terms of that interaction, but I don't have to worry about whether I put a comma in the right place or a semicolon. I don't have to worry about syntax."

That interface, he argues, changes everything. "It's pretty democratic. And I think as we see more physical AI robotics come on track, that interface will become fairly seamless. The actual business of interacting will become fairly seamless."

But the more interesting question is organizational. "The interesting piece is how that changes how we interact as a collective for a given task." Woods has spent years studying this. "We did big system studies where we're going: we've got to explore close to home on Earth, we've got to look out to the moon, to Mars and beyond. Should we use an astronaut for that, or should we use a robot?"

The answer is almost never simple. "You get interesting observations about how we need collaboration. There's the three Cs: cooperation, collaboration, and coordination. There's models we have to start thinking through."

His conclusion surprises some people. "I see this more as a team-based effort with ourselves and intelligent agents augmenting each other in different ways. That feels to me like where this is going to end up, given the capability that's starting to rise on the digital side."

Humans and robots as colleagues, each bringing different strengths to shared goals. The astronaut who can improvise and make intuitive leaps. The robot that can work for months without rest and process sensor data beyond human perception. The AI that can identify patterns across planetary-scale datasets. All working together, mediated by interfaces so natural you forget you're talking to a machine.

But that future requires solving problems we haven't solved yet. The trust problem, the competence-versus-comprehension problem, the hybrid architecture problem. Woods is working on all of them at once, building systems for lunar missions while trying to teach the industry lessons it should have learned decades ago. Because the robots aren't replacing us. They're extending us. And that extension has to be built on architectures that work, not just ones that sound plausible.


Author's Analysis

2035. A lunar mining robot encounters a cavern system beneath the regolith that wasn't in any geological model. What happens next depends on which version of the future we build.

Version A: The robot freezes. Sends a data packet to Earth. Waits. Mission control spends six hours in emergency meetings. After an extended period, it gets permission to investigate. But the cavern entrance has partially collapsed from thermal cycling. Opportunity lost.

Version B: The robot runs two parallel processes. The first evaluates immediate danger in milliseconds. The second reasons methodically about mission implications. One keeps it alive. The other decides whether to explore or mark the location and move on. Both work together. Neither dominates. The robot investigates safely within two hours and discovers water ice deposits that change the entire mining operation.

Mark Woods is building Version B. But most of the AI industry is building Version A without realizing it. When Woods says "an LLM can give apparent comprehension but not necessarily competence," he's pointing at the challenge people like him face in exploiting the power of LLMs on critical missions. A chatbot that hallucinates citations is embarrassing. An autonomous system that hallucinates operational decisions on a $2 billion Mars mission ends careers. Woods and the wider space AI community have explored a way forward, with NASA JPL demonstrating how it works in real mission conditions. He and his collaborators convinced operations teams to trust autonomous systems by choosing symbolic AI that explains its reasoning over neural networks that couldn't provide the explainability they needed. That type of autonomy has since been used in the rover workflow by NASA JPL to confirm the first potential biosignature on another planet, built on hybrid architectures whose roots go back to the early 2000s. He believes there are opportunities and lessons to draw from early work in robotic autonomy, and that the answer most likely lies in using the right combination of technologies to achieve mission success and create a step-change in our exploration capabilities.

The future Woods describes has robots extending humans across distances our biology can't handle. Robots that stop when genuinely stuck instead of hallucinating plausible-sounding solutions. Robots whose decisions can be audited and understood when they need to be. But getting there requires drawing on many research threads and going beyond the hype cycle to deliver robust AI and robotic applications when the stakes are high. When the first truly autonomous lunar mission encounters something unexpected at 3 AM Houston time with no communication window, will it have the hybrid architecture to handle it, or will it just freeze like every overly conservative rover before it? The robots aren't replacing us. They're extending us. But that extension has to be built on architectures that work, not just ones that demo well.


About Dr. Mark Woods

Dr. Mark Woods is a deep-tech innovator, Executive Director, and technical strategist whose career spans more than two decades at the frontier of AI, robotics, and autonomy. He has designed and delivered intelligent systems for some of the most demanding environments, from deep industrial operations to planetary exploration.

He led the development of a key autonomy element for the European Space Agency's ExoMars rover mission, and related research spanning autonomous navigation, target detection, and high-level decision-making. Since then, he has directed multiple first-of-a-kind developments in applied AI, robotics and autonomous systems, resulting in patents, peer-reviewed publications, and recognized technology firsts.

As Executive Director and Chief Strategy Officer at CFMS Ltd, Mark drives business strategy and innovation vision, helping shape the UK's deep-tech ecosystem and establishing new national capabilities in AI, autonomy, and robotics. He completed his PhD with the AT&T Group, combining advanced research in AI with industrial application, a foundation that continues to inform his work at the intersection of innovation and delivery.

He serves on the UK Space Agency's Space Exploration Advisory Committee (SEAC), the UN Expert Advisory Group on Guidelines for Lunar Exploration (EAGGLE), and several industrial and academic boards. As a thought leader, he has collaborated with organizations including BP, NVIDIA, NASA JPL, Bank of England, Schlumberger, Network Rail, Sellafield, and the UK Department for Science and Technology.

Mark is focused on step-change innovation, combining imagination, technical depth and strategic insight to create intelligent systems with measurable lasting impact. He works on developments which seek to move beyond incremental improvement, demonstrating how AI, robotics/physical AI and autonomy technologies can redefine what is possible across multiple sectors.

For more information, contact Mark directly on LinkedIn.