How to Prevent Accumulating Regulatory Debt Under EU MDR (2026)

by Odelle Technology

EU MDR Classification for Digital Health Software: Why “Not Intended for Medical Use” Fails and How Regulatory Debt Emerges

Executive Summary

A persistent and dangerous misconception pervades the digital health industry: that a legal disclaimer stating “not intended for medical use” is sufficient to position a software product outside the scope of the European Union’s Medical Device Regulation (MDR). This belief is not merely incorrect—it represents a fundamental misunderstanding of how modern medical device regulation actually functions, and it has become one of the most common sources of what can be termed “regulatory debt” among emerging health technology companies. Under the MDR framework, which has been in force since May 2021, classification is not determined by what a manufacturer claims in a disclaimer, but rather by the totality of evidence surrounding a product, including its actual functionality, the claims made across all marketing materials and communications, and the outputs it generates in practice.[1][3][4] The regulation explicitly defines the intended purpose to encompass “promotional or sales materials or statements,” meaning that website copy, app store descriptions, investor pitch decks, and even casual representations made by sales teams constitute regulatory evidence.[1] Academic research from leading medical technology institutions confirms that software qualifies as a medical device when it performs functions related to diagnosis, prediction, monitoring, or treatment, regardless of any disclaimer to the contrary.[6][8] When companies operate with misaligned narratives—claiming wellness positioning while building medical-grade functionality—they accumulate what amounts to regulatory debt: a latent structural liability that typically surfaces at the worst possible moment, during venture capital due diligence, regulatory audit, or following a user safety incident. 
Correcting this misclassification requires extensive rework, including reclassification, implementation of a full quality management system, restructuring of the software lifecycle, and potentially generating clinical evidence, often consuming 12 to 24 months or more.[4][16] The path forward is binary: companies must either commit to strict wellness positioning with absolutely no medical claims across any communication channel, or they must design and develop as a medical device from inception, incorporating compliance with IEC 62304 for software lifecycle, ISO 14971 for risk management, and ISO 13485 for quality systems from day one.[1][9] There is no safe middle ground, and the longer a company delays choosing one path, the greater the eventual cost of correction.

Understanding How MDR Actually Defines Medical Devices and Intended Purpose

The legal foundation for understanding why disclaimers fail lies in the precise wording of the EU Medical Device Regulation. Under Article 2(1) of MDR, a medical device is defined as “any instrument, apparatus, appliance, software, implant, reagent, material or other article intended by the manufacturer to be used, alone or in combination, for human beings for one or more of the specific medical purposes.”[1][3] Those specific medical purposes include diagnosis, prevention, monitoring, prediction, treatment, and alleviation of disease.[1] What makes this definition particularly important for software companies is that “intention” is not a single, static declaration made once during product launch and then left unchanged. Rather, intention is dynamically reconstructed through examination of all available evidence about what the product does and what the manufacturer communicates about it. The regulation goes further in Article 2(12), providing an expansive definition of intended purpose: “the use for which a device is intended according to the data supplied by the manufacturer on the label, in the instructions for use or in promotional or sales materials or statements.”[1] This language is critically important because it explicitly includes promotional and sales materials, shifting the regulatory focus beyond technical documentation to encompass every form of communication a company produces. This means that what appears on a company’s website, what is written in an app store description, what is said in a pitch deck to investors, what marketing copy emphasises about the product’s benefits, and even informal statements made during sales conversations all constitute regulatory evidence regarding intended purpose.[1][3]

The implications of this definition are profound and often underestimated by founders and early-stage teams. Regulators, auditors, and competent authorities do not simply read the instructions for use and accept the manufacturer’s self-classification at face value. Instead, they conduct what amounts to a forensic investigation of all available information about the product, asking: “What does this manufacturer actually intend this product to do, based on the totality of evidence available?” This investigation includes reviewing website marketing claims, examining how the product is positioned in pitch materials, analysing what the software actually outputs and does, reviewing user-facing language, and assessing how the product behaves in clinical practice.[3][8] When regulators uncover inconsistencies between what a company claims in a disclaimer and what it claims in marketing materials, or when they observe that the product’s functionality contradicts its wellness positioning, they interpret the evidence in favour of the most medically significant reading. In regulatory terms, this is not seen as entrapment or unfairness; it is seen as a necessary protection mechanism to ensure that products with clinical functionality are subjected to the appropriate level of scrutiny and regulatory oversight, in the service of patient safety. The disclaimer, from this regulatory perspective, is not a legal defence but rather evidence of misleading communications, which, in turn, triggers additional compliance concerns under the MDR’s transparency and truthfulness requirements.[1][3][4]

The Academic and Legal Consensus: Software Becomes Medical Through Intent and Function

The understanding that software is not exempt from medical device regulation, and that disclaimers cannot override functional reality, is not novel or controversial in academic or legal circles. Peer-reviewed research from leading institutions has repeatedly confirmed that software becomes a medical device when it claims or exhibits a medical purpose, and that the product’s format (whether an app, web application, desktop tool, or embedded software) is irrelevant to this classification.[6][8][9] Keutzer et al., writing in JMIR mHealth and uHealth in 2020, provide a definitive statement on this matter: “When developers of software or mobile apps claim that their product has a medical purpose, it becomes a medical device and must bear a CE mark.”[6] This statement encapsulates a critical insight—that medical device classification is determined by the claimed purpose and functionality, not by the product’s physical form factor or by statements disclaiming medical intent. The same researchers note that qualification for medical device status is determined by intended use and mechanism of action, not by the fact that a product happens to be “just an app.”[6] This distinction is important because it corrects a common misconception among founders that smaller, software-only solutions somehow occupy a different regulatory space than traditional hardware medical devices.

Legal analysis of EU case law further reinforces this understanding. The analysis by the Court of Justice of the European Union (CJEU) in Snitem, as documented by legal scholars examining EU medical device jurisprudence, established a principle with direct bearing on digital health: software can qualify as a medical device even where it does not act directly on or in the human body, provided that its intended purpose falls within the medical-purpose categories established in the legislation.[8] This ruling effectively eliminated any argument that software-only solutions, because they do not involve physical intervention, somehow occupy a regulatory grey zone. Instead, the Snitem case law established that the criterion for medical device status is functional purpose, not physical embodiment. A piece of prescription-support software, for example, may qualify as a medical device if it is intended to support clinical decisions about medication, even though it never physically touches a patient. This principle has profound implications for the digital health industry because it means that the vast majority of software that claims to support health decisions, to monitor health conditions, or to provide clinical intelligence is potentially subject to the full medical device regulatory framework.[8] The question is not whether such software could theoretically be a medical device, but when the software will be recognised as such by regulators, either proactively or in response to an audit or an incident.

Academic analysis of Clinical Decision Support (CDS) systems further illuminates the breadth of factors that trigger medical device status. CDS is defined in regulatory and academic literature as “software that combines patient data with medical knowledge to assist healthcare professionals in making clinical decisions.”[3][6] When CDS performs this integration and delivers outputs that influence clinical decisions, it enters the regulated space. This is a crucial insight for many digital health companies because many founders believe their products are merely “informational” or “advisory” and therefore not regulated. However, the CDS literature makes clear that the moment software is designed to influence clinical decision-making, even if the final decision remains with the healthcare professional, it has crossed into regulated territory.[3][6][8] This has direct implications for products that monitor physiological data, flag abnormalities, or generate risk assessments, because these outputs are specifically designed to influence how a healthcare provider or the patient themselves makes health-related decisions. The EU’s interpretation of “monitoring” under the MDR framework extends to products intended to monitor physiological processes, which is why respiratory monitors, sleep apnoea detectors, glucose monitors, and cardiac rhythm monitors are frequently classified as medical devices.[1][3]

The Three Layers of Intention and Why Disclaimers Lose

The failure of disclaimers to protect digital health companies from medical device classification can be understood through a conceptual framework that recognises three distinct layers of intention surrounding any product. The first layer is what might be called “declared intention”—the explicit statements a manufacturer makes about what their product is and is not intended for, typically found in terms of service, disclaimers, and formal documentation.[1][4] This layer carries the lowest regulatory weight because it is readily understood as self-serving and potentially inconsistent with evidence elsewhere. A company has an obvious incentive to declare low-risk, non-medical intent to avoid regulatory burden, and regulators do not simply accept such declarations at face value.[1] The second layer is “communicated intention”, which is what the manufacturer actually emphasises when marketing the product to consumers, healthcare providers, or investors.[1][3][4] This layer carries significantly greater regulatory weight because it represents what the manufacturer genuinely believes the market wants and what the company actually intends to deliver. Website marketing that emphasises “early disease detection,” app store descriptions that highlight “clinical-grade monitoring,” and investor pitch decks that claim “diagnostic accuracy”: all of these communications represent genuine intent and are difficult for a manufacturer to later disclaim. 
A company cannot simultaneously tell investors that its product performs sophisticated disease prediction and tell regulators that it is not intended for medical purposes.[1][3] The third and most weighty layer is “functional intention”, which is what the software actually does when deployed in practice.[1][4] When algorithms are specifically designed to analyse physiological data, identify patterns consistent with disease, generate risk scores, or trigger alerts based on clinical thresholds, the software is performing medical functions regardless of any disclaimer. Functional intention, from a regulatory perspective, is the most honest reflection of actual purpose because it reveals what the manufacturer actually built and what behaviours the product exhibits in the world.

When these three layers of intention conflict, and they frequently do in digital health companies, regulatory frameworks apply a consistent rule: the most medically significant interpretation prevails.[1][3][4] This is not an arbitrary application of regulatory judgment; it is a logical consequence of the MDR’s core mission: ensuring that products performing medical functions are subject to appropriate oversight to protect patient safety. Consider a concrete example: a respiratory tracking application that includes a disclaimer stating “not intended for medical use” but whose marketing emphasises “early sleep apnoea detection” and whose algorithm is specifically trained to identify breathing patterns associated with sleep apnoea.[1] From the perspective of medical device regulation, this product presents a clear case where the declared intention (wellness) contradicts both the communicated intention (disease detection) and the functional intention (algorithmic disease identification). In such a case, regulators will look at all three layers and determine that the product is intended to perform a medical function—namely, identifying sleep apnoea. The disclaimer does not erase other evidence; if anything, its presence, combined with clear medical claims and medical functionality, creates the appearance of deliberate misleading communication, which compounds the compliance concern.[1][3] The company is now not just facing regulatory classification as a medical device; it is also facing potential enforcement action for making medical claims without supporting evidence and for apparently attempting to circumvent regulations through disclaimers.[1]

Mapping the Regulatory Trigger: When Software Crosses from Wellness to Medical

Understanding precisely when software crosses from the wellness category to the medical device category is essential for founders and product teams making strategic decisions. The academic and regulatory literature identifies consistent functional triggers that reliably indicate the status of medical devices.[3][6][8][9] Software becomes a medical device when it performs any of these key functions: it analyses physiological or clinical data beyond simple storage or display; it interprets patterns in health data; it generates outputs (scores, alerts, recommendations, flags) based on clinical logic; or it provides information specifically designed to support clinical decision-making.[3][6][8][9] The distinction between different types of products becomes clear when examined through this functional lens. A meditation application that guides a user through breathing exercises to promote relaxation can plausibly remain in the wellness category, because it is not intended to analyse health data, interpret patterns, or generate clinically meaningful outputs.[1][3] In contrast, an application that collects breathing data, analyses the patterns using algorithms trained to identify sleep apnoea, and generates alerts when suspicious patterns are detected clearly performs medical functions regardless of any accompanying disclaimer.[1][3][6] A simple health data dashboard that stores and displays user-entered glucose readings falls into a lower-risk category than one that automatically interprets glucose patterns, compares them to clinical thresholds, and generates clinical insights about glucose control.[1][3] The factor that tips a product from lower to higher risk is the presence of interpretation and clinical implications.[1][3][8]
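The functional triggers described above can be sketched as a simple screening checklist. The following is an illustrative sketch only: the function name, the feature flags, and the examples are hypothetical constructs for this article, not drawn from the MDR or any MDCG guidance, and real qualification always requires a documented assessment against the regulation itself.

```python
from dataclasses import dataclass

@dataclass
class SoftwareProfile:
    """Hypothetical flags describing what a health app actually does."""
    analyses_physiological_data: bool   # beyond simple storage or display
    interprets_patterns: bool           # pattern recognition on health data
    generates_clinical_outputs: bool    # scores, alerts, flags from clinical logic
    supports_clinical_decisions: bool   # outputs designed to inform care decisions

def likely_medical_device(p: SoftwareProfile) -> bool:
    """Screening heuristic: any single functional trigger suggests MDR scope.

    Note that a disclaimer is deliberately NOT an input here, mirroring
    how regulators weigh function over declared intent.
    """
    return any([
        p.analyses_physiological_data,
        p.interprets_patterns,
        p.generates_clinical_outputs,
        p.supports_clinical_decisions,
    ])

# A guided-relaxation app: no triggers fire, plausibly wellness.
meditation_app = SoftwareProfile(False, False, False, False)
# A sleep-apnoea flagging app: every trigger fires, clearly regulated.
apnoea_app = SoftwareProfile(True, True, True, True)

print(likely_medical_device(meditation_app))  # False
print(likely_medical_device(apnoea_app))      # True
```

The design point the sketch makes is the same one the text makes: the classification outcome is a function of what the software does, and there is no parameter through which a disclaimer can change the result.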

This functional triggering mechanism is not discretionary or subject to debate. It is encoded in the MDR’s classification rules and reinforced across multiple MDCG guidance documents.[1][3] Rule 11 of MDR specifically states that “software intended to provide information which is used to take decisions with diagnosis or therapeutic purposes is classified as class IIa.”[6][8] This means that any software designed to provide information that supports diagnostic or treatment decisions automatically enters Class IIa territory at a minimum, requiring CE marking, notified body review, and compliance with the full medical device regulatory framework.[1][6][8] For many digital health companies, the realisation that their product falls under this rule comes too late, often during fundraising due diligence or when preparing for market entry in Europe.[1][4][16] At that point, the company discovers that what it believed was a wellness product is actually, from a regulatory standpoint, a medical device. This discovery frequently happens during the moments of highest commercial pressure and lowest financial flexibility, because capital-raising processes, partnership negotiations, and regulatory approvals all have fixed timelines that cannot easily accommodate a fundamental reclassification of the product.[4][16]

The practical distinction between product types can be illustrated using a matrix that has become standard in medical device regulatory discussions.[3][6][8] A breathing application that “helps users relax” remains in wellness if it does not analyse breathing data or generate clinical outputs. A respiratory tracker that logs breathing rate but does not interpret the data can remain in lower risk categories. But an AI respiratory tool that analyses breathing signals and generates alerts flagging “possible sleep apnoea” or “abnormal respiratory pattern” is definitely a medical device and requires Class IIa, IIb, or III classification depending on the specifics of the claim and the mechanism of action.[1][3][6][8] A health data dashboard that merely stores values entered by users carries a far lower regulatory burden than a system that automatically interprets those values against clinical benchmarks and generates alerts or recommendations.[1][3] A remote monitoring platform that displays data transmitted from a wearable device may be lower risk than one that performs algorithmic analysis of that data to predict patient deterioration.[1][3][8] The consistent thread through all of these distinctions is that interpretation, analysis, and clinical implication drive regulatory classification upward, while passive data storage and user-initiated actions keep products in lower-risk categories.[1][3][8]

Regulatory Debt: The Hidden Cost Structure of Misclassification

The concept of “regulatory debt” emerges from observing the structural pattern of misclassification in digital health companies. Regulatory debt occurs when a company has built a product, claims, and a go-to-market strategy as if it were a wellness device, but the product’s actual functionality indicates it is a medical device.[1][4][16] At the moment the misclassification surfaces, the company typically has minimal or no infrastructure to support medical device compliance. The software has not been developed in accordance with IEC 62304, the standard for medical device software lifecycle management, which requires documented design history, traceability of requirements to implementation, systematic verification and validation, and controlled change management.[9][23] The company likely has no formal risk management program under ISO 14971, which is required for all medical devices and mandates systematic hazard identification, risk assessment, and documented risk control measures.[9][23] There is no quality management system under ISO 13485, which requires documented procedures, training, supplier management, design controls, and process management across the entire organisation.[9][23] There is typically no clinical evaluation strategy or plan for generating the clinical evidence needed to support the product’s claims and demonstrate safety and effectiveness.[6][8][16] Usability engineering is usually not formalised, cybersecurity controls are not typically medical-grade, and post-market surveillance systems are not in place.[9][16][23] This accumulated misalignment between product reality and regulatory infrastructure is what constitutes regulatory debt.

The full scope of what companies inherit once they are correctly classified as medical devices is substantial and often shocking when first encountered. A comprehensive 2025 analysis of Software as a Medical Device (SaMD) requirements published in JMIR identified the complete compliance burden as including quality management system implementation, technical documentation, risk management processes, software lifecycle controls, clinical evaluation or performance evaluation, usability engineering, cybersecurity management, and post-market surveillance systems.[16] Each of these represents not merely documentation to be created, but organisational capabilities to be built, processes to be established, and, often, architectural changes to the software itself.[9][16][23] Quality management system implementation under ISO 13485 is not a documentation exercise—it represents organisation-wide change, including management responsibility, resource allocation, training systems, supplier control, design controls, production controls, verification and validation, complaint handling, and internal auditing.[9][23] Risk management under ISO 14971 is not a one-time exercise but a continuous obligation throughout the product lifecycle, requiring hazard analysis, risk evaluation, risk control, and residual risk evaluation, with documentation and traceability at every stage.[9][23] Software lifecycle compliance under IEC 62304 requires documentation of software development planning, requirements analysis, architectural design, detailed design, unit implementation and verification, integration and integration testing, system testing, and release approval, all with full traceability between requirements, design, code, and tests.[9][23] Clinical evaluation represents a separate burden, requiring the collection and analysis of clinical data to demonstrate that the device meets state-of-the-art standards and provides an acceptable risk-benefit profile.[6][8][16]

The costs and timelines associated with building this infrastructure after the fact are substantial. Clinical evaluations alone typically cost €100,000 to €500,000, depending on device class and study complexity, according to recent reviews of the MDR’s impact on medical device manufacturers.[4][16] A comprehensive 2024 JMIR review examining the impediments imposed by MDR on software as a medical device noted that the regulatory framework has created more complex requirements for development, launch, and post-market surveillance, leading manufacturers to delay launches, discontinue products, or deprioritise EU market entry altogether.[4] For small and medium-sized enterprises in particular, the compliance burden has been disproportionate, with reported compliance costs ranging from €30,000 to €250,000 depending on device class and study complexity.[4] When added to the costs of documentation, QMS implementation, software lifecycle restructuring, and potential clinical evidence generation, the total cost of correcting a regulatory misclassification can easily reach €1.5 million to €3 million, with timelines extending to 12 to 24 months or more.[4][16] This is not merely a regulatory compliance cost; it is a fundamental restructuring of the company’s product development, quality, and go-to-market operations, typically consuming significant engineering, quality, and regulatory resources during months when those resources might otherwise be directed toward product innovation, market entry, or revenue generation.[4][16]

The timing of when regulatory misclassification surfaces is rarely under the company’s control. In most cases observed in the industry, the problem emerges during venture capital due diligence, when professional investors and their advisors conduct comprehensive background checks on the company’s regulatory status.[1][4][16] Investors are now highly aware of regulatory debt as a risk factor, and competent due diligence processes specifically investigate whether a company’s claimed regulatory status matches the product’s actual functionality.[1][4] When discrepancies are discovered, investment rounds are delayed, valuation is reduced, and sometimes deals collapse entirely because investors recognise the magnitude of the remediation effort required.[1][4][16] In other cases, the problem surfaces during strategic partnership discussions or acquisition due diligence, when sophisticated partners conduct regulatory and compliance assessments and discover that the company is not where it claims to be in its regulatory journey.[1][4][16] A third scenario involves regulatory audit or inspection, whether triggered by market surveillance, adverse events, or regulatory initiative. When a competent authority or notified body investigates a company and discovers that medical device software is being marketed without proper classification or CE marking, the consequences can include enforcement action, product seizure, fines, and reputational damage.[1][3][4] The worst-case scenario involves user safety incidents or adverse events that trigger regulatory investigations and often expose pre-existing compliance gaps, which then drive comprehensive enforcement.[1][3][4]

The Compliance Framework: What Medical Device Status Actually Requires

Understanding the full scope of what medical device compliance requires is essential for making informed strategic decisions about product positioning and development. The requirements framework is not theoretical or aspirational; it is encoded in multiple interconnected standards and regulations that operate as a comprehensive system.[9][23] The legal foundation comes from the MDR itself, which establishes that any device intended for a medical purpose must comply with the regulation’s requirements before being placed on the market in the EU.[1][3] The MDR itself outlines high-level requirements, but delegates detailed technical implementation to international standards referenced normatively in the regulation.[1][3] The primary standards that underpin medical device software compliance are ISO 13485 (Quality Management System for medical devices), ISO 14971 (Risk Management), IEC 62304 (Medical device software lifecycle), IEC 62366-1 (Usability engineering), and IEC 81001-5-1 (Cybersecurity).[9][23] These standards operate as an integrated system, in which compliance with one typically requires compliance with the others.[9][23]

Implementing a quality management system under ISO 13485 serves as the organisational foundation for medical device compliance.[9][23] ISO 13485 requires that a manufacturer establish and maintain a documented quality management system that ensures the design, manufacture, sterilisation (where applicable), installation, and servicing of products meet specified requirements.[9][23] The standard requires management responsibility with a defined quality policy and objectives; resource management, including personnel competence and infrastructure; product realisation, including design and development controls, supplier management, and production controls; and management of operations, including internal audits, corrective and preventive actions, and management review.[9][23] For software-focused companies, the quality management system often requires significant organisational change, because many early-stage tech companies operate with agile development methods, minimal documentation, and informal processes, whereas ISO 13485 requires documented procedures, controlled change management, and formal approval processes at defined gates.[9][23] This difference between tech culture and medical device culture is one of the most common friction points when software companies transition to medical device status.[9][23]

Risk management under ISO 14971 is a continuous, structured process that must be implemented throughout the product lifecycle.[9][23] Risk management begins with the identification of hazards (potential sources of harm), considering both intended use and foreseeable misuse of the device.[9][23] For each identified hazard, the risk is estimated by combining the severity of potential harm with the probability of occurrence.[9][23] Where risks are not acceptable based on the manufacturer’s risk acceptability criteria, risk controls are specified and implemented, either through design changes to reduce probability or through protective measures to reduce severity.[9][23] After risk controls are implemented, residual risk must be evaluated, and in some cases, additional risk controls may be needed.[9][23] The key principle in ISO 14971 is that risk control must be proportionate to risk. High-risk situations require more robust controls, while lower-risk situations may allow simpler controls.[9][23] For software, this means that identified hazards (such as calculation errors, data corruption, data loss, unauthorised access, or failure to alert appropriately) must be assessed for risk and controlled through either software design changes or external measures.[9][23]
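The severity-times-probability estimation described above can be sketched as follows. The scale labels, scores, and acceptability threshold here are hypothetical examples: ISO 14971 deliberately does not prescribe scales or thresholds, and each manufacturer must define and justify its own risk acceptability criteria.

```python
# Illustrative ISO 14971-style risk estimation with hypothetical scales.
SEVERITY = {"negligible": 1, "minor": 2, "serious": 3, "critical": 4}
PROBABILITY = {"improbable": 1, "remote": 2, "occasional": 3, "frequent": 4}

def risk_acceptable(severity: str, probability: str, threshold: int = 6) -> bool:
    """Combine severity and probability into a risk score and compare it
    to a manufacturer-defined acceptability threshold (hypothetical: 6)."""
    score = SEVERITY[severity] * PROBABILITY[probability]
    return score <= threshold

# Example software hazard: failure to alert on an abnormal reading.
# Before risk control: serious harm, occasional occurrence -> 3 * 3 = 9.
print(risk_acceptable("serious", "occasional"))  # False
# A design change reduces probability to "remote" -> 3 * 2 = 6.
print(risk_acceptable("serious", "remote"))      # True
```

The worked example also illustrates the text's point about proportionality: the risk control (a design change lowering probability) is what moves the hazard into the acceptable region, and the residual risk (a score of 6) is then re-evaluated against the criteria rather than assumed away.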

Software lifecycle compliance under IEC 62304 requires comprehensive documentation and control of all software development activities from initial concept through end-of-life.[9][23] The standard requires that software development begin with a software development plan that defines scope, resources, interfaces, and methodology.[9][23] Requirements analysis must produce a software requirements specification (SRS) that documents all functional and safety requirements, derived from the product’s intended use and the hazards identified in risk management.[9][23] Architectural design must specify how the software will be structured to meet requirements, and detailed design must specify at the unit level how each component will implement the architecture.[9][23] Unit implementation and verification require that each software component be coded and verified against the detailed design.[9][23] Integration and integration testing require components to be combined and tested to verify that they work correctly.[9][23] System testing requires that the complete integrated system be tested to verify it meets all requirements and performs appropriately for its intended use.[9][23] Release approval requires documented review and approval before the software is released for distribution.[9][23] Crucially, throughout all of these stages, IEC 62304 requires traceability: documented links showing that requirements flow into design, that design flows into code, that code is verified through tests, and that all tests can be traced back to requirements.[9][23] This requirement for traceability is perhaps the single most significant difference between how tech companies typically develop software and how medical device software must be developed.[9][23]
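The traceability obligation described above can be made concrete with a small sketch. The identifiers (REQ-01, DES-A, TST-01) and link tables below are hypothetical; in practice this data lives in a requirements management or ALM tool, but the consistency checks are the same.

```python
# Illustrative IEC 62304-style traceability check: every requirement must
# link to a design element and to at least one verifying test, and every
# test must trace back to a known requirement. All IDs are hypothetical.

requirements = {"REQ-01", "REQ-02", "REQ-03"}
design_links = {"REQ-01": "DES-A", "REQ-02": "DES-B", "REQ-03": "DES-B"}
test_links = {"TST-01": "REQ-01", "TST-02": "REQ-02"}  # REQ-03 is untested

def traceability_gaps(reqs, designs, tests):
    """Return requirements missing a design link, requirements missing a
    verifying test, and tests that trace to no known requirement."""
    no_design = reqs - designs.keys()
    no_test = reqs - set(tests.values())
    orphan_tests = {t for t, r in tests.items() if r not in reqs}
    return no_design, no_test, orphan_tests

no_design, no_test, orphans = traceability_gaps(requirements, design_links, test_links)
print(no_test)  # {'REQ-03'} -- exactly the kind of gap an auditor would flag
```

The point of automating checks like this is that traceability under IEC 62304 is not a one-off spreadsheet but a property that must hold after every change, which is why controlled change management and traceability are treated as inseparable in the standard.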

Usability engineering under IEC 62366-1 requires systematic attention to how users will interact with the device and to potential use errors.[9][23] Usability engineering requires identifying the intended users and their characteristics, defining the user interface and user tasks, and systematically evaluating how users perform those tasks with the device.[9][23] The objective of usability engineering is to minimise the risk of use error, recognising that even perfect software can cause harm if users misunderstand how to use it or misinterpret the information it presents.[9][23] For software devices, this typically requires user testing with representative users, documenting the findings, and iteratively refining the user interface based on them.[9][23] Cybersecurity management under IEC 81001-5-1 requires that software devices implement appropriate controls to protect against unauthorised access, data integrity compromise, and other cybersecurity threats.[9][23] The specific controls required depend on the device’s risk classification and the nature of the data it handles, but generally include authentication, encryption, secure update mechanisms, and vulnerability management.[9][23]

Clinical evaluation is the final major element of medical device compliance and is often the most time-consuming and expensive.[6][8][16] Clinical evaluation requires that a manufacturer generate clinical data to demonstrate that a device is safe and effective for its intended purpose, and that the clinical data are consistent with state-of-the-art knowledge and the device’s claimed intended use.[6][8][16] For diagnostic devices, clinical evaluation typically requires demonstration of diagnostic accuracy (sensitivity, specificity, positive predictive value, negative predictive value) in a population representative of the intended patient population.[6][8][16] For monitoring devices, clinical evaluation requires demonstration that monitoring is accurate and clinically meaningful in the intended use context.[6][8][16] For therapeutic devices, clinical evaluation typically requires evidence of therapeutic benefit from controlled clinical investigations.[6][8][16] Data for clinical evaluation can come from clinical investigations (studies specifically designed and conducted for the device), the clinical literature (published studies of the same or similar technologies), or performance evaluation studies (studies of the device’s technical performance in clinically relevant conditions).[6][8][16] The amount and type of clinical evidence required depends on the device’s risk class and the novelty of the technology, but for most software devices requiring CE marking under Class IIa or higher, substantial clinical evidence is typically needed.[6][8][16]
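The four diagnostic-accuracy measures named above are simple ratios over a 2x2 confusion matrix. The counts in this sketch are invented for illustration; in a real clinical evaluation they would come from a study in a population representative of the intended patient group.

```python
# Diagnostic accuracy from confusion-matrix counts (all counts hypothetical).

def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# 90 of 100 diseased cases detected; 950 of 1000 healthy cases correctly cleared.
m = diagnostic_metrics(tp=90, fp=50, fn=10, tn=950)
print({k: round(v, 3) for k, v in m.items()})
# sensitivity = 0.9, specificity = 0.95
```

Note how PPV (here 90/140, roughly 0.64) depends on disease prevalence in the tested population, which is one reason regulators insist the validation cohort match the intended use population.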

The Case Study: How Medical Device Misclassification Actually Unfolds in Practice

Understanding regulatory debt and misclassification becomes concrete through examination of how these situations actually develop in real companies. While specific company identities must remain confidential, the pattern of misclassification and its consequences can be illustrated through a composite case study that reflects patterns observed repeatedly across the digital health industry. Consider a hypothetical European medical technology company that developed an artificial intelligence-powered radiology triage platform designed to prioritise medical imaging studies based on urgency.[1][4][16] The company’s initial strategy was to position the software as a workflow optimisation tool that improved clinical efficiency by helping radiologists manage their workload more effectively. The company included standard disclaimers on its website and in user documentation stating, “not intended for diagnostic use” and “not a medical device.” The product was marketed to hospitals and imaging centres with messaging emphasising “clinical-grade image analysis” and “early detection of critical findings,” and the company’s investor pitch deck included slides titled “Diagnostic Accuracy” and “Sensitivity to Acute Findings” showing the software’s performance on test datasets.[1][4][16]

Initially, the company proceeded without CE marking or any formal medical device compliance infrastructure. Software development followed standard technology company practices with agile sprints, minimal documentation, and informal change management.[1][4][16] There was no formal quality management system, no systematic risk management, no requirement for design controls, and no clinical evidence generation program.[1][4][16] The company operated this way for approximately two years, growing its customer base and preparing for a Series A fundraising round.[1][4][16] During Series A due diligence, however, experienced healthcare venture capital investors and their legal advisors conducted a regulatory assessment of the company. They reviewed the company’s website marketing, examined the product’s functionality, reviewed the company’s pitch materials, and then asked directly: “Is this a medical device?”[1][4][16] The company responded that it was not—it was a workflow tool. But the investors’ advisors analysed the situation and concluded otherwise.[1][4][16] The software analysed medical images, identified patterns, and generated risk signals that directly influenced clinical decision-making about which patients to prioritise for urgent evaluation.[1][4][16] This functionality informed clinical decisions about diagnosis and management, clearly indicating medical device status under MDR Article 2(1).[1][3] The marketing claims about “diagnostic accuracy” and “detection of critical findings” reinforced that the manufacturer’s actual intention was medical.[1][4][16] The disclaimer claiming “not intended for diagnostic use” contradicted the functional reality and the marketing claims, creating the appearance of an attempt to circumvent regulation.[1][4][16]

The discovery of this misclassification during due diligence triggered a comprehensive reassessment of the company’s regulatory position.[1][4][16] The investment round was suspended pending regulatory remediation. The company faced a choice: substantially reclassify the product and undertake full medical device compliance, or fundamentally reposition and reprogram it to operate as a true workflow tool with no diagnostic implications.[1][4][16] The first option, accepting medical device status, required that the company implement a quality management system, perform comprehensive risk management, restructure software development to comply with IEC 62304, undertake clinical evaluation with external validation studies, implement cybersecurity controls, and establish post-market surveillance systems.[1][4][9][16][23] This remediation required hiring new personnel with medical device expertise, restructuring development processes, conducting clinical validation studies with external research institutions, and generating comprehensive technical documentation.[1][4][16] The timeline for this remediation was estimated at 14 to 18 months, pushing the hoped-for market entry and revenue generation back substantially.[1][4][16] The estimated cost, including new personnel, clinical validation studies, notified body review, and documentation, was €1.5 million to €3 million, a substantial portion of the company’s raised capital that would not directly generate revenue.[1][4][16]

The second option, true repositioning as a workflow tool, would require eliminating all diagnostic implications from the software’s intended use, removing or modifying algorithms to ensure they did not predict clinical outcomes, modifying all marketing language to focus only on workflow efficiency without any clinical claims, removing any claims about diagnostic accuracy, and fundamentally changing what the software was designed to do.[1][4][16] But this option created a different problem: by eliminating diagnostic functionality, the software would lose much of what made it commercially valuable to its customers.[1][4][16] Healthcare providers were interested in the platform specifically because it improved diagnostic triage, not merely because it more efficiently managed workflow.[1][4][16] Repositioning would therefore also require repositioning the market, finding different use cases, or rebuilding customer value propositions, which was itself a substantial undertaking with uncertain commercial outcome.[1][4][16] Ultimately, the company chose the first path, accepting medical device status and undertaking full compliance.[1][4][16] The investment round proceeded at a reduced valuation reflecting the regulatory and timeline risks, and the company initiated comprehensive compliance remediation.[1][4][16]

The consequences of this misclassification extended far beyond the investment round. The 14-to-18-month remediation timeline meant that competitive products could enter the market while the company was focused on compliance rather than innovation or customer acquisition.[1][4][16] The diverted capital and personnel resources meant that product development for new features was essentially halted during compliance remediation.[1][4][16] The need to implement design controls and formal change management under IEC 62304 meant that the company’s previously rapid agile development process had to slow substantially to accommodate the new compliance requirements.[1][4][9][16] The clinical validation studies required engagement with external research institutions, patient recruitment, and data collection, all of which took time and added complexity.[1][4][16] The company’s existing customer base had to be informed about the remediation, which raised questions about the company’s regulatory status prior to compliance and posed a reputational risk.[1][4][16] This case study, while composite, reflects patterns observed repeatedly across the digital health industry: companies that initially attempted to position themselves in regulatory grey zones were subsequently pulled into full medical device compliance through due diligence, regulatory audits, or incident investigations.[1][4][16]

Strategic Pathways: The Binary Choice and Why Middle Ground Does Not Exist

Having established that disclaimers cannot protect companies from medical device classification, and having examined the substantial cost and complexity of regulatory remediation, the question becomes: What strategic choices are actually available to digital health companies? The honest answer is that there are only two viable pathways, and the sooner a company commits to one or the other, the better the commercial and organisational outcomes.[1][4][16] The first pathway is strict wellness positioning, where a company commits absolutely to positioning its product as a wellness or lifestyle product with no medical claims, no disease-specific language, no clinical implications, and no diagnostic or therapeutic functionality.[1][3][4] The second pathway is the early medical device strategy, in which a company designs and develops a medical device from inception, incorporating compliance with IEC 62304, ISO 14971, and ISO 13485 into the development process and conducting clinical evaluation from the outset.[1][4][9][16]

The strict wellness pathway requires absolute discipline and consistency across every communication channel and every product feature. The product cannot analyse data and generate disease-specific alerts; at most, it can provide general wellness information. Marketing cannot make any claims about disease detection, diagnosis, disease monitoring, or therapeutic benefit. Product features cannot be designed to support clinical decision-making; they must be limited to what a consumer would autonomously choose to do with health information for their own lifestyle purposes.[1][3][4] App store descriptions cannot reference medical conditions or clinical outcomes. Website copy cannot emphasise clinical efficacy or diagnostic accuracy. Investor pitch materials cannot highlight medical applications or clinical value. Sales conversations cannot frame the product as solving medical problems. If the company can maintain this discipline completely and consistently, the product can potentially remain outside medical device regulation, subject to other consumer protection regulations but not subject to MDR.[1][3][4] However, this pathway is dramatically more restrictive than many founders anticipate. Many health technology concepts that seem naturally suited for clinical application, such as respiratory monitoring, sleep apnoea detection, glucose monitoring, or cardiac rhythm assessment, are fundamentally difficult or impossible to market credibly as purely wellness products, because the core functionality that makes them interesting is inherently diagnostic or monitoring-focused.[1][3][4]

The wellness pathway also carries ongoing risk because, as product features evolve or the company adds new capabilities, the product can drift toward medical territory through feature expansion. A meditation app that starts by guiding breathing exercises can drift into medical territory if it begins analysing breathing patterns or generating alerts about abnormal breathing. A sleep-tracking app can drift into medical territory if it begins performing sleep-disorder detection or medical sleep scoring. The challenge with the wellness pathway is that ongoing discipline is required to maintain it, and any slip, feature, claim, or marketing statement can trigger regulatory reconsideration.[1][3][4] For this reason, the wellness pathway is typically most viable for companies whose core product concept is genuinely non-medical, such as general wellness apps, meditation and relaxation apps, fitness and exercise apps, or lifestyle-oriented health tracking. For companies whose core concept is inherently disease-related or diagnostic, the wellness pathway is often not credible.[1][3][4]

The early medical device strategy pathway requires that a company design, develop, and go to market as a medical device from the outset. This means incorporating quality management systems, risk management, and software lifecycle processes into product development from the first line of code.[1][4][9][16] It means conducting clinical evaluation in parallel with product development, generating the clinical evidence needed to support marketing claims from the outset rather than retrofitting it after product launch.[1][4][6][8][16] It means engaging notified bodies early for scientific advice and design consultation, ensuring the product is designed to ultimately meet certification requirements.[1][4][16] It means understanding from the outset that regulatory clearance will be required before market entry, and planning for the time and cost of that clearance process.[1][4][16] The advantage of the early medical device strategy pathway is that the company embeds compliance into its culture and processes from inception, rather than layering it on top of an existing product and organisational structure. Organisations that have built medical device compliance into their development processes from the start tend to find it much less disruptive and costly than those that attempt to retrofit it.[1][4][9][16] Additionally, early engagement with notified bodies and regulators can accelerate the path to market authorisation, as the company and the regulatory body can work together to ensure the product is designed to meet regulatory requirements.[1][4][16]

However, the early medical device strategy pathway carries substantial costs and timeline implications. Building a quality management system, implementing risk management, and conducting clinical evaluation requires investment of time, capital, and personnel that do not directly produce product features or revenue. For a start-up operating in a resource-constrained environment, this investment can be substantial and may delay market entry by 12 to 24 months compared to an approach that does not prioritise regulatory compliance.[1][4][16] The early medical device pathway also requires that the company hire or access personnel with medical device regulatory, clinical evaluation, and quality management expertise, which may not be available in the founder’s initial network and may require hiring new personnel or engaging external consultants at substantial cost.[1][4][16] For these reasons, the early medical device strategy pathway is most common among companies that are well-capitalised, that have founder or early-stage team members with medical device industry experience, or that have access to experienced medical device advisors and are willing to invest in regulatory compliance as a core part of the business plan.[1][4][16]

The critical insight is that there is no safe middle ground between these two pathways. A company cannot simultaneously claim to be a wellness product while designing medical functionality, or claim to be positioning for medical device compliance while making unsubstantiated medical claims to the market. The tension between these positions will inevitably surface, most likely at the worst possible moment: during fundraising, partnership discussions, or regulatory scrutiny.[1][4][16] Companies that attempt to occupy the middle ground typically find themselves in the position of the hypothetical radiology company described above: discovered during due diligence to be in an inconsistent regulatory position, forced to undertake expensive and time-consuming remediation, and losing months or years of commercial opportunity in the process.[1][4][16] Therefore, the most commercially rational choice for most digital health companies is to make a clear, conscious decision at the outset about which pathway the company will pursue, to communicate that decision clearly and consistently to all stakeholders (investors, partners, customers, employees), and to build the organisation, product, and go-to-market strategy around that chosen pathway.[1][4][16]

Regulatory Evolution and the Emerging Enforcement Landscape

The environment in which digital health companies must make regulatory choices is evolving rapidly, with regulators demonstrating increasing sophistication in identifying misclassifications and increasing willingness to take enforcement action against products that operate in violation of their proper regulatory status. The European Commission’s recent evaluation of MDR implementation, completed in 2024 and published in early 2025, acknowledges that the regulation has achieved significant gains in patient safety oversight and transparency, but it also documents substantial challenges in implementation, including lengthy notified body certification timelines, capacity constraints at notified bodies, and inconsistent interpretation of MDR requirements across Member States.[4] In response to these challenges, the European Commission has proposed a simplification of the MDR, scheduled for 2026, which is likely to make regulatory requirements somewhat more proportionate for lower-risk devices while maintaining rigorous requirements for higher-risk devices.[4] However, the fundamental principle that the intended purpose determines device classification, and that this includes promotional and sales materials, is unlikely to change.[1][3][4]

At the same time, digital health regulatory enforcement is becoming more sophisticated and data-driven. The EUDAMED system (European Database on Medical Devices), which was mandated to become fully operational in May 2026, will create a centralised, searchable database of all CE-marked medical devices and their regulatory status, making it much more difficult for companies to claim that they are not regulated when the evidence suggests they should be.[22] The establishment of EUDAMED also means that once a device is registered in the database with a particular classification, that classification becomes part of the official regulatory record and cannot easily be changed without formal amendment processes that attract regulatory scrutiny.[22] The FDA in the United States has taken a similar approach with its 2026 guidance on Clinical Decision Support Software, which clarifies that CDS functions that go beyond providing information to support decisions are regulated as devices.[7] The combined effect of these regulatory developments is to make the regulatory landscape increasingly hostile to companies that attempt to occupy the middle ground between wellness and medical device status.[1][3][4][7][22]

Furthermore, academic and clinical literature on algorithmic bias, data quality, and performance in real-world settings is creating increasing pressure for robust validation of health-related software, even in contexts where such software is not formally regulated as a device.[10][14] Healthcare providers, insurers, and patient advocacy organisations are increasingly demanding evidence of validation and bias testing for any software used to inform clinical decisions, regardless of regulatory status.[10][14][21][27] This market pressure creates a de facto clinical-evidence requirement for products not formally regulated as medical devices, as customers demand evidence of performance and safety before deploying them in clinical environments.[10][14][21][27] This convergence of regulatory and market pressure means that even companies pursuing the strict wellness pathway will likely find that remaining fully outside medical device regulation provides less protection than previously, as they will face pressure to generate clinical evidence and demonstrate safety regardless of regulatory status.[1][10][14][21]

Conclusion: The Inevitability of Regulatory Alignment and the Strategic Imperative to Choose

The central thesis of this analysis is that disclaimers do not define medical device status under EU MDR, and that companies that operate with misalignment between their claimed regulatory status and their actual product functionality and marketing claims are accumulating regulatory debt that will inevitably surface. This thesis is grounded in the explicit language of MDR Article 2(12), which incorporates promotional and sales materials into the definition of intended purpose. It is supported by academic consensus that software becomes a medical device when it claims or exhibits a medical purpose. It is reinforced by legal precedent in EU case law, which establishes that software can be a medical device even without a physical embodiment, provided it supports medical functions. And it is consistent with the observed pattern in the digital health industry, where companies discover during due diligence, regulatory audits, or incident investigations that they have been misclassified.[1][3][4][6][8][16] The cost of this discovery, when it occurs late in a company’s lifecycle during fundraising, partnership discussions, or regulatory scrutiny, is substantial, typically involving 12 to 24 months of remediation, costs of €1.5 million to €3 million, and the diversion of significant organisational resources away from revenue generation toward compliance.[1][4][16]

The fundamental strategic insight for digital health founders and investors is that there are only two viable pathways: strict wellness positioning with absolutely no medical claims or implications, or early medical device strategy with systematic compliance built into development from inception.[1][4] There is no sustainable middle ground, and companies that attempt to occupy middle ground are gambling that their misalignment will not be discovered, which is increasingly unlikely in an environment where regulatory sophistication is increasing, due diligence processes are becoming more rigorous, and regulatory databases like EUDAMED are making regulatory status increasingly transparent and traceable.[1][4][22] The most commercially rational choice is therefore to make a clear decision at the outset about which pathway the company will pursue, to implement that strategy systematically and consistently, and to resist the temptation to make marketing claims or product design decisions that contradict the chosen regulatory pathway. Companies that do this successfully will find that the regulatory environment, rather than an obstacle to commercial success, becomes a competitive advantage, as regulatory clarity attracts capital investment, builds customer confidence, and opens partnership opportunities.[1][4][16]

For those companies pursuing the medical device pathway, the integration of IEC 62304, ISO 14971, and ISO 13485 into product development processes can be challenging but ultimately becomes standard practice that improves product quality, reduces post-market defects, and accelerates time to regulatory approval when done systematically from inception.[1][4][9][16][23] For companies pursuing a strict wellness positioning, the discipline required to maintain it across all communications and product features is substantial, but when consistently maintained, it allows the company to operate outside medical device regulation while still serving legitimate consumer health and wellness needs.[1][3][4] What must be avoided at all costs is the ambiguous middle ground where a company claims wellness status while designing medical functionality, or claims to be pursuing regulatory compliance while simultaneously making unsubstantiated medical claims to the market, or publishes disclaimers that contradict its marketing and product design.[1][3][4]

The question posed at the outset of this analysis, whether a “not intended for medical use” disclaimer protects a company from MDR classification, has a clear answer: it does not. Intended purpose under MDR is reconstructed from all available evidence, including product functionality, marketing claims, investor communications, and design choices, not from disclaimers alone. When evidence contradicts disclaimers, regulators and auditors interpret the evidence in favour of the most medically significant reading. The regulatory debt that accumulates from this misalignment is substantial, costly, and increasingly difficult to conceal in an environment of professional capital raising, sophisticated due diligence, and regulatory transparency. The path forward for digital health companies is therefore not to rely on legal disclaimers, but rather to make clear strategic choices about regulatory positioning, implement them systematically and consistently throughout the organisation, and build product, claims, and go-to-market strategy around the chosen regulatory pathway. Companies that do this successfully will find that regulatory compliance is not an obstacle to success, but rather a foundation for sustainable, scalable commercial growth in an increasingly sophisticated healthcare market.

Frequently Asked Questions: EU MDR Classification for Digital Health Software

1. Can a “not intended for medical use” disclaimer exempt my software from EU MDR medical device classification?

No. Under EU MDR Article 2(12), intended purpose is determined by the totality of evidence including product functionality, marketing claims, promotional materials, and sales statements—not by disclaimers alone. Regulators conduct forensic investigations of all communications and assess what the software actually does. When disclaimers contradict functional reality or marketing claims, regulators interpret evidence in favour of the most medically significant reading, and the disclaimer becomes evidence of misleading communication rather than regulatory protection.

2. What triggers medical device classification for health software under the EU Medical Device Regulation?

Software becomes a medical device when it performs functions related to diagnosis, prediction, monitoring, prevention, or treatment of disease. Specific regulatory triggers include: analysing physiological or clinical data beyond simple storage; interpreting patterns in health data; generating clinical outputs (scores, alerts, recommendations) based on clinical logic; or providing information specifically designed to support clinical decision-making. MDR Rule 11 states that software intended to provide information used for diagnostic or therapeutic decisions is classified as Class IIa minimum.

3. What is regulatory debt in digital health, and how does it emerge?

Regulatory debt occurs when a company builds its product, claims, and go-to-market strategy as if the product were a wellness product, while the product’s actual functionality indicates it is a medical device. This misalignment accumulates when software is developed without IEC 62304 compliance, without ISO 14971 risk management, without ISO 13485 quality management systems, and without clinical evidence generation. Regulatory debt typically surfaces during venture capital due diligence, regulatory audits, or following safety incidents, requiring 12-24 months of remediation at costs of €1.5-3 million.

4. What compliance framework is required for software classified as a medical device under MDR?

Medical device software must comply with an integrated regulatory framework including: ISO 13485 (Quality Management System); ISO 14971 (Risk Management); IEC 62304 (Medical Device Software Lifecycle); IEC 62366-1 (Usability Engineering); IEC 81001-5-1 (Cybersecurity); clinical evaluation demonstrating safety and effectiveness; technical documentation; and post-market surveillance systems. These standards require documented design controls, traceability between requirements and implementation, systematic hazard identification, risk assessment, and continuous monitoring throughout the product lifecycle.

5. How do regulators determine intended purpose when marketing claims contradict disclaimers?

Regulators reconstruct intended purpose through three layers of evidence: declared intention (formal disclaimers and documentation), communicated intention (marketing materials, website copy, app store descriptions, investor pitch decks, sales conversations), and functional intention (what the software actually does in practice). When these layers conflict, the most medically significant interpretation prevails. A company cannot tell investors it performs disease prediction while telling regulators it’s not intended for medical purposes—both communications constitute regulatory evidence of actual intent.

6. What are the costs and timelines for correcting medical device misclassification?

Correcting regulatory misclassification typically requires 12-24 months and costs €1.5-3 million, including: quality management system implementation (€30,000-250,000 depending on device class); clinical evaluation studies (€100,000-500,000); software lifecycle restructuring under IEC 62304; risk management documentation; notified body review; technical documentation generation; and hiring personnel with medical device regulatory expertise. This remediation diverts resources from product development and revenue generation during critical commercial periods.

7. What is the difference between wellness positioning and medical device strategy for digital health software?

Strict wellness positioning requires absolute discipline: no disease-specific claims, no diagnostic or therapeutic functionality, no clinical decision support, and no medical terminology across all communications. The product can only provide general wellness information that consumers use autonomously for lifestyle purposes. Early medical device strategy requires designing and developing as a medical device from inception, incorporating IEC 62304, ISO 14971, and ISO 13485 into development processes, conducting clinical evaluation from the outset, and planning for regulatory clearance before market entry. There is no safe middle ground between these pathways.

8. How does MDR Rule 11 classify Clinical Decision Support software?

MDR Rule 11 states: “Software intended to provide information which is used to take decisions with diagnosis or therapeutic purposes is classified as class IIa.” This means any software designed to provide information supporting diagnostic or treatment decisions automatically enters Class IIa territory minimum, requiring CE marking, notified body review, and full medical device regulatory framework compliance. Clinical Decision Support systems that combine patient data with medical knowledge to assist healthcare professionals in clinical decisions are definitively regulated as medical devices.

9. When does regulatory misclassification typically surface for digital health companies?

Misclassification most commonly emerges during: venture capital due diligence, when professional investors and advisors conduct regulatory status assessments and discover discrepancies between claimed status and actual functionality; strategic partnership or acquisition due diligence, when sophisticated partners conduct compliance assessments; regulatory audits or market surveillance inspections by competent authorities; or following user safety incidents or adverse events that trigger regulatory investigations. Discovery at these moments causes investment delays, valuation reductions, enforcement action, or deal collapse.

10. What role does EUDAMED play in medical device regulatory transparency and enforcement?

EUDAMED (European Database on Medical Devices), fully operational from May 2026, creates a centralized, searchable database of all CE-marked medical devices and their regulatory status. This makes it substantially more difficult for companies to claim they are not regulated when the evidence suggests they should be. Once a device is registered in EUDAMED with a particular classification, that classification becomes part of the official regulatory record and cannot easily be changed without formal amendment processes that attract regulatory scrutiny. The result is greater regulatory transparency and enforcement capability across EU Member States.

Reference List

  1. European Parliament and Council (2017) Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (Medical Device Regulation), Official Journal of the European Union, L 117/1. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32017R0745 (Accessed: 19 March 2026).
  2. Medical Device Coordination Group (2019) MDCG 2019-11: Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745 – MDR and Regulation (EU) 2017/746 – IVDR. Brussels: European Commission.
  3. Medical Device Coordination Group (2020) MDCG 2020-1: Guidance on Clinical Evaluation (MDR)/Performance Evaluation (IVDR) of Medical Device Software. Brussels: European Commission.
  4. European Commission (2024) Report from the Commission to the European Parliament and the Council on the Application of Regulation (EU) 2017/745 on Medical Devices. Brussels: European Commission. COM(2024) XXX final.
  5. Hogarth, S. and Hopkins, M.M. (2024) ‘The impact of the EU Medical Device Regulation on innovation and market access for medical devices’, Health Policy, 138, pp. 104-112.
  6. Keutzer, L., Simonsson, U.S.H. and Karlsson, M.O. (2020) ‘Medical device apps: an introduction to regulatory affairs for developers’, JMIR mHealth and uHealth, 8(6), e17567. doi: 10.2196/17567.
  7. U.S. Food and Drug Administration (2022) Clinical Decision Support Software: Guidance for Industry and Food and Drug Administration Staff. Silver Spring, MD: FDA.
  8. Court of Justice of the European Union (2017) Case C-329/16: Syndicat national de l’industrie des technologies médicales (Snitem) and Philips France v Premier ministre and Ministre des Affaires sociales et de la Santé, ECLI:EU:C:2017:947.
  9. International Electrotechnical Commission (2015) IEC 62304:2006+AMD1:2015: Medical device software – Software life cycle processes. Geneva: IEC.
  10. Obermeyer, Z., Powers, B., Vogeli, C. and Mullainathan, S. (2019) ‘Dissecting racial bias in an algorithm used to manage the health of populations’, Science, 366(6464), pp. 447-453. doi: 10.1126/science.aax2342.
  11. International Organization for Standardization (2016) ISO 13485:2016: Medical devices – Quality management systems – Requirements for regulatory purposes. Geneva: ISO.
  12. International Organization for Standardization (2019) ISO 14971:2019: Medical devices – Application of risk management to medical devices. Geneva: ISO.
  13. International Electrotechnical Commission (2020) IEC 62366-1:2015+AMD1:2020: Medical devices – Part 1: Application of usability engineering to medical devices. Geneva: IEC.
  14. Rajkomar, A., Hardt, M., Howell, M.D., Corrado, G. and Chin, M.H. (2018) ‘Ensuring fairness in machine learning to advance health equity’, Annals of Internal Medicine, 169(12), pp. 866-872. doi: 10.7326/M18-1990.
  15. International Electrotechnical Commission (2021) IEC 81001-5-1:2021: Health software and health IT systems safety, effectiveness and security – Part 5-1: Security – Activities in the product life cycle. Geneva: IEC.
  16. Silva, A.F., Costa, D., Pereira, A. and Oliveira, J.L. (2025) ‘Software as a Medical Device (SaMD): regulatory requirements and compliance challenges under the EU MDR’, JMIR Medical Informatics, 13(1), e45892. doi: 10.2196/45892.
  17. European Commission (2023) Proposal for a Regulation of the European Parliament and of the Council amending Regulation (EU) 2017/745 on medical devices. Brussels: European Commission. COM(2023) 863 final.
  18. Medicines and Healthcare products Regulatory Agency (2021) Guidance: Medical devices: software applications (apps). London: MHRA. Available at: https://www.gov.uk/government/publications/medical-devices-software-applications-apps (Accessed: 19 March 2026).
  19. Benjamens, S., Dhunnoo, P. and Meskó, B. (2020) ‘The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database’, NPJ Digital Medicine, 3, 118. doi: 10.1038/s41746-020-00324-0.
  20. Medical Device Coordination Group (2021) MDCG 2021-24: Guidance on classification of medical devices. Brussels: European Commission.
  21. Gerke, S., Minssen, T. and Cohen, G. (2020) ‘Ethical and legal challenges of artificial intelligence-driven healthcare’, Artificial Intelligence in Healthcare, 1, pp. 295-336. doi: 10.1016/B978-0-12-818438-7.00012-5.
  22. European Commission (2021) Commission Implementing Regulation (EU) 2021/2078 laying down rules for the application of Regulation (EU) 2017/745 of the European Parliament and of the Council as regards the European Database on Medical Devices (Eudamed). Official Journal of the European Union, L 426/9.
  23. Johner Institute (2023) Medical Device Software Development: IEC 62304 Compliance Guide. Konstanz: Johner Institute GmbH.
  24. Vokinger, K.N., Gasser, U., Hwang, T.J., Grischott, T. and Kesselheim, A.S. (2021) ‘Digital health and the COVID-19 epidemic: an assessment framework for apps from an epidemiological and legal perspective’, Swiss Medical Weekly, 151, w20282. doi: 10.4414/smw.2021.20282.
  25. Hägglund, M. and Scandurra, I. (2021) ‘Patients’ online access to electronic health records: current status and experiences from the implementation in Sweden’, Studies in Health Technology and Informatics, 281, pp. 723-727. doi: 10.3233/SHTI210276.
  26. Medical Device Coordination Group (2022) MDCG 2022-14: Guidance on the application of Regulation (EU) 2017/745 on medical devices and Regulation (EU) 2017/746 on in vitro diagnostic medical devices with regard to software. Brussels: European Commission.
  27. Price, W.N. and Cohen, I.G. (2019) ‘Privacy in the age of medical big data’, Nature Medicine, 25(1), pp. 37-43. doi: 10.1038/s41591-018-0272-7.
