Complex Evaluate

A Python library for evaluating complex ontology alignments in EDOAL (Expressive and Declarative Ontology Alignment Language) format, adapting the precision, recall, and F-measure metrics to the complex matching case.

Highlights

  • Evaluate EDOAL alignments from files or in-memory strings.
  • Weighted precision/recall for simple vs. complex mappings.
  • Built on an unordered tree edit distance similarity measure.
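The scoring scheme sketched below is an assumption about how similarity-based precision and recall are typically computed in complex matching, not the library's actual internals: each predicted mapping is credited with its best similarity against the reference set (and vice versa for recall), where the similarity function stands in for the unordered tree edit distance measure.

```python
def relaxed_scores(predicted, reference, sim):
    """Hedged sketch of relaxed precision/recall/F-measure.

    `sim(a, b)` returns a similarity in [0, 1]; in this sketch it is a
    placeholder for the tree-edit-distance-based measure.
    """
    if not predicted or not reference:
        return 0.0, 0.0, 0.0
    # Each predicted mapping scores its best match in the reference set.
    precision = sum(max(sim(p, r) for r in reference) for p in predicted) / len(predicted)
    # Each reference mapping scores its best match in the predicted set.
    recall = sum(max(sim(p, r) for p in predicted) for r in reference) / len(reference)
    # Harmonic mean, guarded against division by zero.
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

# Toy similarity: exact match only.
p, r, f = relaxed_scores({"A=B"}, {"A=B"}, lambda a, b: 1.0 if a == b else 0.0)
```

With identical predicted and reference sets and an exact-match similarity, all three scores are 1.0, mirroring the identity test shown later in this quickstart.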

Quickstart

Install the library:

pip install complex-evaluate

Evaluate alignments from EDOAL files:

from complex_evaluate.evaluate import evaluate_edoal

precision, recall, f_measure = evaluate_edoal(
    "predicted_alignment.edoal",
    "reference_alignment.edoal",
)

print(f"Precision: {precision:.3f}")
print(f"Recall: {recall:.3f}")
print(f"F-measure: {f_measure:.3f}")
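For context, the F-measure reported above is, in the standard formulation, the harmonic mean of precision and recall (a small helper shown here for illustration, not part of the library's API):

```python
def f_measure(precision, recall):
    # Harmonic mean of precision and recall; returns 0.0 when both
    # are zero to avoid division by zero.
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: precision 0.8 and recall 0.5 give an F-measure of about 0.615.
score = f_measure(0.8, 0.5)
```

The harmonic mean penalizes imbalance, so a high F-measure requires the alignment to be both accurate and complete.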

Or evaluate directly from EDOAL strings:

from complex_evaluate.evaluate import evaluate_edoal_string

predicted = '''<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" 
         xmlns="http://knowledgeweb.semanticweb.org/heterogeneity/alignment#">
  <Alignment>
    <map>
      <Cell>
        <entity1>
          <Class rdf:about="http://example.org#ClassA" />
        </entity1>
        <entity2>
          <Class rdf:about="http://example.org#ClassB" />
        </entity2>
      </Cell>
    </map>
  </Alignment>
</rdf:RDF>'''

reference = predicted  # Use same for identity test

p, r, f = evaluate_edoal_string(predicted, reference)
print(f"F-measure: {f}")  # Should be 1.0 for identical alignments

What next