As artificial intelligence (AI) continues to redefine automation and autonomy, nowhere is its impact more evident than in unmanned and robotic systems. The convergence of AI, sensors, and mechanical design has created vehicles capable of making split-second decisions, and it has raised new legal and regulatory questions when those systems fail to perform as expected.

To unpack the evolving landscape of robotics and AI-driven vehicles, we spoke with Dr. Moiz Khan, an accomplished engineer and AI expert whose work spans autonomous system design, machine learning, and regulatory compliance for safety-critical technologies. Let’s see what he has to say about the distinctions between traditional and generative AI, the technical and legal complexities of unmanned vehicles, and the emerging disputes shaping this rapidly advancing field.

Q: Let’s start with a basic distinction. Everyone hears about “AI” — but what’s the difference between traditional AI and generative AI?

Dr. Khan: Traditional AI is really about pattern recognition and decision-making. Think of a perception system in a drone that detects objects — it’s trained on a dataset, and it classifies new inputs into categories that correspond to real-world objects. Generative AI is different. Instead of just recognizing patterns, it creates new content using natural language and its “past experiences”. That could mean generating synthetic training data for weather scenarios or producing navigation plans that weren’t explicitly programmed.
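
To make the classification side of that distinction concrete, here is a minimal, hypothetical sketch of what Dr. Khan describes: a model "trained" on labeled examples that maps a new sensor-derived input to a known category. The nearest-centroid rule and the toy feature vectors below are deliberately simple stand-ins for a real perception model.

```python
import numpy as np

# Toy "training data": 2-D feature vectors for two object classes a drone
# perception system might distinguish. Real systems use learned features.
TRAIN = {
    "pedestrian": np.array([[0.9, 0.1], [0.8, 0.2]]),
    "vehicle": np.array([[0.1, 0.9], [0.2, 0.8]]),
}

# "Training" here is simply computing one centroid per class.
CENTROIDS = {label: feats.mean(axis=0) for label, feats in TRAIN.items()}

def classify(feature_vec):
    """Map a new sensor-derived feature vector to the nearest known category."""
    return min(CENTROIDS,
               key=lambda label: float(np.linalg.norm(feature_vec - CENTROIDS[label])))

print(classify(np.array([0.85, 0.15])))  # -> pedestrian
```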

The upside is more flexibility. The downside is the potential for “hallucinations” — outputs that look convincing but aren’t reliable. And that introduces new risks when you’re dealing with safety-critical systems like unmanned vehicles, where domain experts are needed to vet and validate results.
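
One common way teams vet generative output before it reaches a safety-critical system is to gate it behind hard, human-authored checks that a plausible-looking hallucination cannot talk its way past. The sketch below is illustrative only: the plan format and the 120-meter altitude ceiling are hypothetical assumptions, not a statement of any specific regulation.

```python
MAX_ALTITUDE_M = 120.0  # illustrative hard limit; real limits come from regulators

def waypoint_ok(wp: dict) -> bool:
    """Hard validity check applied to each generated waypoint."""
    return 0.0 <= wp["altitude_m"] <= MAX_ALTITUDE_M

def accept_plan(plan: list) -> bool:
    """Accept a generated plan only if every waypoint passes the hard checks."""
    return bool(plan) and all(waypoint_ok(wp) for wp in plan)

# A convincing but non-compliant generated plan is rejected, not flown.
print(accept_plan([{"altitude_m": 90.0}, {"altitude_m": 150.0}]))  # -> False
```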

Q: How does the integration of robotics and AI make unmanned vehicles so technically complex?

Dr. Khan: You are merging sensors, real-time algorithms, planning systems, and physical hardware that all must work together, so the challenge lies in accuracy and reliability. A delay in sensor fusion or inaccurate processing of visual data can result in a planning error, which can ripple into a control problem; a visual perception error, for instance, can lead to improper prioritization of movement. Because of the system’s complexity, failures can exist in several locations, and determining whether the issue is a technical or a logistical one requires a root cause analysis by an expert.
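
As a hedged sketch of how a timing fault can ripple downstream, the hypothetical fusion step below refuses to combine sensor readings whose timestamps are too far apart, forcing the planner to treat the estimate as missing rather than silently acting on stale data. The tolerance value and data layout are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor: str
    timestamp_s: float  # when the measurement was taken
    value: float

MAX_SKEW_S = 0.05  # illustrative tolerance; real systems tune this per sensor pair

def fuse(readings):
    """Combine readings only if they describe the same moment in time.

    Returning None makes the timing fault visible to the planner instead of
    letting mismatched data ripple into a planning or control error.
    """
    times = [r.timestamp_s for r in readings]
    if max(times) - min(times) > MAX_SKEW_S:
        return None
    return sum(r.value for r in readings) / len(readings)

print(fuse([Reading("camera", 10.00, 1.0), Reading("lidar", 10.20, 3.0)]))  # -> None
```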

Q: What kinds of failures usually spark disputes when it comes to AI, robotics, and automation?

Dr. Khan: The classic ones are perception errors — missing a pedestrian, misclassifying an obstacle, or failing to adapt in bad weather. Then you see planning or decision-making failures, like choosing an unsafe path or not handing control back to a human in time.

Q: Algorithm transparency has been a big topic lately. How does that come up in disputes?

Dr. Khan: Transparency — or the lack of it — is huge. Courts want to know why a vehicle made a decision. Was it because the system never saw the obstacle? Or did it see it and misrank the risk? Without explainability or clear logs, you’re left with black-box behavior. That makes it harder for companies to defend themselves and for plaintiffs to prove negligence.
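
The antidote to black-box behavior Dr. Khan describes is structured decision logging. Below is a minimal sketch, with illustrative field names, of a log record that captures what the system saw, how it ranked the risk, and what it decided, so that the "did it see the obstacle or misrank it?" question has an answer.

```python
import json
import time

def log_decision(detections, action, path="decision_log.jsonl"):
    """Record what the system saw, how it ranked risk, and what it chose.

    Field names are illustrative; the point is that every decision leaves
    an inspectable trail instead of black-box behavior.
    """
    record = {
        "timestamp_s": time.time(),
        "detections": detections,  # what the perception system reported
        "risk_ranking": sorted(detections, key=lambda d: d["risk"], reverse=True),
        "action": action,  # what the planner decided
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision(
    [{"label": "pedestrian", "risk": 0.9}, {"label": "debris", "risk": 0.3}],
    action="brake",
)
```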

Q: What about perception systems like sensor fusion and computer vision?

Dr. Khan: Those are often front and center. If a manufacturer knows their sensors perform poorly in glare or fog but doesn’t disclose that limitation, you might see a failure-to-warn claim. Or if the fusion algorithm assumes sensors are perfectly synchronized — and they’re not — that can lead to product liability claims. Attorneys will often want to dig into calibration records and training datasets to see if the risk was foreseeable.
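
One concrete artifact in that digging is the calibration record itself. Here is a hedged, hypothetical sketch of the kind of check an investigator might script; the 90-day validity window is an illustrative policy, not an industry standard.

```python
from datetime import date, timedelta

CALIBRATION_VALID_FOR = timedelta(days=90)  # hypothetical policy, not a standard

def calibration_current(last_calibrated: date, incident_date: date) -> bool:
    """Was the sensor's calibration still within its window at the incident?"""
    return incident_date - last_calibrated <= CALIBRATION_VALID_FOR

print(calibration_current(date(2025, 1, 10), date(2025, 6, 1)))  # -> False
```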

Q: Software updates and version control — why do they matter so much in investigations?

Dr. Khan: Because the behavior of an unmanned system can literally change overnight with an update. If the system was updated a week before an accident, you have to ask: did that update introduce a bug? Was it tested? Were operators trained on the new version? Was the hardware working correctly and compatible with the software update? For example, think about an old smartphone that receives a new software update and suddenly faces compatibility and usability issues. Without strong version control and documentation, it’s almost impossible to reconstruct what the system “knew” at the time.
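
A sketch of one way to preserve that reconstructability, under assumed names and log fields: stamp every logged event with the software version and a fingerprint of the deployed model weights at write time, so "which build was running?" has an unambiguous answer after the fact.

```python
import hashlib
import json

SOFTWARE_VERSION = "2.3.1"  # hypothetical build identifier

def model_fingerprint(weights: bytes) -> str:
    """Hash the deployed model weights so 'which model was running?' has one answer."""
    return hashlib.sha256(weights).hexdigest()[:12]

def stamped_event(event: dict, weights: bytes) -> str:
    """Attach version metadata to every event as it is written, not after the fact."""
    event["software_version"] = SOFTWARE_VERSION
    event["model_fingerprint"] = model_fingerprint(weights)
    return json.dumps(event)

print(stamped_event({"event": "lane_change"}, weights=b"example-weights"))
```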

Q: Do robotics and AI also show up in intellectual property disputes?

Dr. Khan: Absolutely. Training data, proprietary control algorithms, and even simulation environments are highly contested. If a company uses another’s data without a license, or if two vendors end up with nearly identical navigation policies, suddenly you’re in a trade secret or patent dispute. Generative AI complicates this further because you may not always know what data went into the training or where it came from without proper documentation.

Q: When legal teams try to reconstruct vehicle behavior, what are the biggest challenges?

Dr. Khan: The data logs are the place to start, and they are often the biggest obstacle. Either they’re incomplete, they’re in proprietary formats, or they don’t capture the intermediate reasoning of the AI. You might know the output — “move left” — but not what intermediate detections or thresholds led there. Without synchronized data across sensors and models, it can be challenging to reconstruct the vehicle’s movement.
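
Once per-sensor logs have been parsed out of their native formats, one basic reconstruction step is merging them into a single timeline so detections and decisions can be replayed in order. The log structure below is hypothetical; real logs are often proprietary.

```python
import heapq

# Hypothetical per-sensor logs, each already sorted by timestamp.
camera_log = [(10.00, "camera", "object_detected"), (10.10, "camera", "object_lost")]
planner_log = [(10.05, "planner", "move_left")]

# Merge the streams into one timeline so events replay in chronological order.
for t, source, event in heapq.merge(camera_log, planner_log):
    print(f"{t:6.2f}s  {source:<8} {event}")
```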

Q: How do regulatory requirements interact with these technical realities?

Dr. Khan: There are regulations in place that provide some guidelines. However, with the rapidly changing AI environment, engineers need to hypothesize about and foresee unexpected challenges. FAA rules for drones or state requirements for autonomous cars provide a baseline. But in practice, these systems face conditions regulators haven’t fully accounted for — like model drift or cybersecurity threats. In litigation, compliance with standards is often used as a defense, while noncompliance or misrepresentation can support a plaintiff’s argument.

Q: From your perspective, what’s the checklist when evaluating an unmanned vehicle failure?

Dr. Khan: I would look at the raw sensor data, the physical model and software version, operator inputs, environmental context, calibration records, and the system’s update history. Then I would reconstruct the chain of events step by step: what the system saw, how it processed it, what decision it made, and how it executed. That’s how you separate defect from misuse from unforeseeable conditions.

Q: Finally, what emerging trends do you think will drive the next wave of disputes in the AI and robotics space?

Dr. Khan: Three stand out. First, the use of generative AI for planning and simulation — because of hallucinations and data provenance issues. Second, continual learning systems that change and adapt behavior after deployment — which raises tough questions about liability over time. And third, the rapid growth of sensor data — which will drive more privacy and surveillance-related claims.

Expert Teams in Advanced Technology

Have an AI-related expert witness need? Reach out to learn more about our expert teams in technology, created to address what we expect to be the key areas of litigation in the evolution of AI and autonomous capabilities.
