Research has found that nearly a third of Common Vulnerabilities and Exposures (CVE) entries may be unsubstantiated, highlighting significant flaws in the metrics organisations use to assess application security.

Dr. Aram Hovsepyan’s review of academic studies and industry data suggests that CVEs, widely considered the reference point for cybersecurity risk, are often inconsistent and lack scientific rigour. The findings raise concerns about the reliance on these identifiers and associated severity scores in security programmes and board-level reporting.

A third disputed

According to the analysis, one-third of CVEs are either disputed or unverified. This raises questions about the reliability of dashboards and automated risk measurement systems built on these datasets. Dr. Hovsepyan’s research states that security teams are “leaning far too heavily on a system that is inconsistent, subjective, and scientifically flawed.”

The Common Vulnerability Scoring System (CVSS), which attaches a severity rating to each vulnerability, also comes in for criticism in the report. While CVSS attempts to classify impact, it does so inconsistently and is built on ordinal numbers, which are not mathematically suited to the quantitative analysis typically applied by security tools.

“CVSS is not risk. It measures impact only, often inconsistently. CVSS scores are based on ordinal numbers; you can’t do arithmetic with ordinal numbers,” the analysis concludes.
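To illustrate the ordinal-numbers point in concrete terms (this sketch is not from the report, and the figures are invented for illustration), consider what happens when a dashboard averages CVSS scores across a backlog. Two backlogs can share the same mean score while carrying very different risk, because the distance between, say, 2.0 and 4.0 on an ordinal scale is not comparable to the distance between 8.8 and 9.8:

```python
# Illustrative sketch only: averaging ordinal CVSS scores can mislead.
# The scores below are made up for the example.
from statistics import mean

backlog_a = [9.8, 2.0, 2.0, 2.0]   # one critical issue among trivial ones
backlog_b = [4.0, 4.0, 3.9, 3.9]   # uniformly low-severity issues

print(mean(backlog_a))  # 3.95
print(mean(backlog_b))  # 3.95 -> identical "average severity", very different reality
```

A dashboard that reports either backlog as "average severity 3.95" hides the single critical finding entirely, which is the kind of arithmetic the report argues ordinal scores cannot support.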

Incentives and context

The analysis points to misaligned incentives within the reporting ecosystem. Researchers gain prestige by publishing new CVEs, while vendors and CVE Numbering Authorities of Last Resort (CNA-LRs), which handle reports for products not covered by a dedicated authority, often avoid rejecting CVE submissions for fear of missing genuine vulnerabilities. This dynamic results in the publication of numerous CVEs that may describe low-impact bugs rather than real security threats.

Dr. Hovsepyan called for a more contextual, risk-driven approach: “Security requires context. True AppSec maturity comes from risk-driven prioritization, not chasing numbers.”

The report presents several examples where CVEs have been assigned to negligible or irrelevant issues. These include a case where a German PhD student secured a CVE with a 9.1 CVSS score for a deprecated system no longer in use, and an instance in which the networking tool curl received a 9.8 CVSS score for a parameter bug that did not represent a security threat. The intentionally insecure OWASP DVWA application was also cited as being awarded CVEs.

“These examples are not rare anomalies,” Dr. Hovsepyan explained. “They show a systemic problem: our primary mechanism for measuring vulnerabilities is producing noise that organisations mistake for signals.”

Disputed designations

The report also critiques the process for disputing CVEs. Once entered into the database, vulnerabilities that are questioned by maintainers are rarely removed. Instead, after a protracted process, they may be labelled “disputed” by MITRE, the organisation responsible for managing the CVE process. In many cases, maintainers opt to fix the reported bug rather than spend resources on a complex administrative challenge.

This means that a sizeable number of CVEs remain in circulation despite lacking a clear security risk, influencing security metrics and organisational priorities long after their publication.

Question marks over scoring

Dr. Hovsepyan’s critique of CVSS notes that the system’s ordinal metrics can result in inconsistent scoring, with the same vulnerability sometimes receiving different scores from the same or different assessors over time. Duplicate vulnerabilities have been recorded as separate CVEs with significantly different severity ratings.

The research also notes that CVSS quantifies only the potential impact, not the likelihood of exploitation, departing from standard risk analysis principles. “If likelihood is zero, the risk is zero, no matter the impact,” the report emphasises.
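The point follows from the standard risk formulation the report appeals to, in which risk is the product of likelihood and impact. A minimal sketch, with purely illustrative numbers that are not CVSS outputs:

```python
# Minimal sketch of the standard formulation: risk = likelihood x impact.
# Values are illustrative assumptions, not CVSS scores.
def risk(likelihood: float, impact: float) -> float:
    """Both factors must be non-zero for any risk to exist."""
    return likelihood * impact

print(risk(likelihood=0.0, impact=9.8))  # 0.0 -> no feasible exploit path, no risk
print(risk(likelihood=0.6, impact=5.0))  # 3.0 -> lower impact, but actually exploitable
```

Because CVSS captures only the impact factor, a high score alone says nothing about whether the risk is non-zero.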

Organisational implications

Referencing award-winning research presented at a major security symposium, Dr. Hovsepyan stated that around 33% of reviewed CVEs were unconfirmed or contested by product maintainers. Most originated from CNA-LRs, again underscoring variability in reporting standards.

Security teams invest considerable resources in addressing reported CVEs and tracking progress based on dashboards that may not accurately reflect risk. Dr. Hovsepyan observes that attackers are indifferent to CVSS statistics, instead seeking vulnerabilities that provide practical opportunities for exploitation.

“Attackers don’t look at your CVSS scores,” Dr. Hovsepyan concluded. “They look for weaknesses that matter. Our defences should do the same.”

Moving forward

The research suggests there is a need for a re-evaluation of how organisations measure and report application security. Dr. Hovsepyan calls for a more scientific and risk-based methodology, placing the wider context and threat modelling at the centre of vulnerability management.

“CVEs and CVSS aren’t useless,” he clarified. “They’re valuable inputs. But they should never be the foundation of an entire AppSec strategy. We need to start with a shared understanding of risk, grounded in threat modelling and contextual triage. Vulnerability dashboards can help, but only when interpreted through a scientific lens.”