Charity Journal Sustainability Scoring Methodology

How We Score Sustainability

Our sustainability scores are independent editorial assessments built on publicly available data, third-party audits, and sector benchmarks — giving donors, partners, and the public a clear, consistent picture of how organisations are really performing.

Last Updated: April 2026
Review Cycle: Annual
Organisations Benchmarked: 180+ INGOs
Score Categories: Up to 6 per article

What Is the Charity Journal Sustainability Score?

Every article in our Sustainability section that covers a specific organisation or programme carries an independent score: a structured assessment of how well that entity is performing across up to six key sustainability dimensions. Scores are not endorsements or rankings; they are editorial tools designed to help readers engage critically with the organisations shaping the social sector.

Scores range from 0 to 100 and are translated into letter grades (A+ through F) for quick reference. Each score is assembled by our editorial team from a defined set of data signals, weighted by category. We do not accept payment from organisations to influence scores, and organisations are notified shortly before a report covering them is published.

Important: Sustainability scores reflect publicly available information at the time of publication. They are not audits. Charity Journal is an independent media publication, not a regulatory or accreditation body. Scores should be read alongside the full article.

What We Measure

Each article may carry scores across up to six subcategories, selected based on what is most relevant to the organisation or programme being assessed. Not all subcategories appear in every article; blank categories are excluded from the overall score calculation.

🌍 Climate Action (up to 100 pts)
Measures the organisation's commitment to environmental sustainability, emission reduction, and climate-resilient programming.
Key signals: carbon footprint data, climate programme delivery rates, net-zero commitments, supply chain sustainability, environmental policy documentation.

⚖️ Gender Inclusion (up to 100 pts)
Assesses gender equity in leadership, programme design, pay, and beneficiary reach across the organisation's operations.
Key signals: women in leadership %, gender pay gap data, gender-disaggregated programme data, inclusion policies, safeguarding frameworks.

🏛️ Governance (up to 100 pts)
Evaluates board structure, transparency, accountability mechanisms, and compliance with sector governance standards.
Key signals: board diversity, published governance frameworks, whistleblower policies, regulatory compliance, trustee independence.

📊 Financial Accountability (up to 100 pts)
Reviews the quality, transparency, and timeliness of financial reporting, overhead ratios, and audit compliance.
Key signals: audited accounts, overhead vs programme spend ratio, on-time filings, financial reserve policies, donor fund usage transparency.

🤝 Community Inclusion (up to 100 pts)
Measures meaningful local participation in programme design, delivery, and evaluation, moving beyond top-down aid models.
Key signals: local partnership ratios, community feedback mechanisms, beneficiary involvement in design, localisation commitments.

🎯 SDG Alignment (up to 100 pts)
Assesses how clearly and measurably the organisation's work maps to the UN Sustainable Development Goals.
Key signals: SDG reporting in annual reports, outcome indicators tied to SDG targets, SDG integration in strategic plans, progress measurement.

How Scores Become Grades

Overall and subcategory scores (0–100) are converted to letter grades for quick readability. The overall score is the average of all filled subcategory scores unless our editorial team applies a manual override — for example, when a single critical failure (such as a financial scandal) warrants a lower overall grade than the average suggests.

Grade (score range): what it means
A+ / A / A− (85–100): Sector-leading performance. Transparent, measurable, and well-documented across all assessed dimensions.
B+ / B / B− (70–84): Strong performance with some gaps. Meets most sector standards with room for improvement in specific areas.
C+ / C / C− (55–69): Average or inconsistent performance. Notable gaps in transparency, delivery, or accountability that warrant attention.
D+ / D (40–54): Below sector standards. Significant concerns identified in publicly available data or reporting.
F (0–39): Critical failures identified. May include regulatory action, major financial irregularities, or serious governance breaches.
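The averaging and grade-conversion rules above can be sketched in Python. The banded ranges come straight from the grade table; the exact cut points between +/− variants within a band are not published, so this sketch splits each band into equal parts as an assumption:

```python
def overall_score(subscores):
    """Average of the filled (non-blank) subcategory scores, 0-100.
    Blank categories are excluded, per the methodology."""
    filled = [s for s in subscores.values() if s is not None]
    if not filled:
        raise ValueError("at least one subcategory must be scored")
    return sum(filled) / len(filled)

# (band floor, grade variants low-to-high, band ceiling) from the table.
BANDS = [
    (85, ["A-", "A", "A+"], 100),
    (70, ["B-", "B", "B+"], 84),
    (55, ["C-", "C", "C+"], 69),
    (40, ["D", "D+"], 54),
    (0,  ["F"], 39),
]

def to_grade(score):
    for floor, grades, ceil in BANDS:
        if score >= floor:
            # Assumption: each band is split evenly among its variants.
            width = (ceil - floor + 1) / len(grades)
            idx = min(int((score - floor) // width), len(grades) - 1)
            return grades[idx]
    return "F"

scores = {"Governance": 82, "Financial Accountability": 74,
          "Climate Action": None, "Gender Inclusion": 90}
avg = overall_score(scores)  # blanks excluded: (82 + 74 + 90) / 3 = 82.0
print(avg, to_grade(avg))    # prints: 82.0 B+
```

An editorial override, where applied, would simply replace the averaged grade; it is a judgement call, not a formula.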

How Every Score Is Built

1. Article Commission & Research
Our editorial team identifies an organisation or programme for assessment. Researchers gather all publicly available data: annual reports, financial filings, programme evaluations, third-party audits, and media coverage from the past three years.

2. Subcategory Selection
The editor selects which subcategories are relevant to this specific organisation or report. An emergency relief organisation may not be assessed on SDG Alignment if that data is unavailable; a large INGO will typically be scored across all six categories.

3. Scoring Against Benchmarks
Each subcategory is scored 0–100 using our internal rubric, benchmarked against the performance of 180+ INGOs in our database. Scores are relative to sector peers, not absolute ideals — a score of 75 means the organisation performs better than approximately 75% of comparable organisations on that dimension.
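The relative-benchmark idea can be illustrated with a short sketch: a signal scores as the share of sector peers the organisation outperforms. The peer values below are hypothetical, and `percentile_score` is an illustrative helper, not part of the published rubric:

```python
from bisect import bisect_left

def percentile_score(value, peer_values, higher_is_better=True):
    """Score one signal 0-100 as the percentage of benchmark peers
    this organisation outperforms on that signal."""
    if not higher_is_better:
        value, peer_values = -value, [-v for v in peer_values]
    ranked = sorted(peer_values)
    below = bisect_left(ranked, value)  # peers strictly below this value
    return round(100 * below / len(ranked))

# Hypothetical signal: % women in leadership across 8 benchmark INGOs.
peers = [22, 30, 35, 38, 41, 44, 50, 58]
print(percentile_score(41, peers))  # outperforms 4 of 8 peers -> 50
```

For signals where lower is better (e.g. overhead ratio), the `higher_is_better=False` flip keeps the "better than X% of peers" reading consistent.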

4. Editorial Review
A second editor reviews the scores and supporting evidence before publication. Any score below 60 or above 90 requires documented justification. Overall grade overrides are applied where the average score would be misleading.
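The justification thresholds in this step reduce to a simple check; the function name here is illustrative, not part of any published tooling:

```python
def needs_justification(score):
    """Editorial rule: subcategory scores below 60 or above 90
    require documented justification before publication."""
    return score < 60 or score > 90

print([s for s in (45, 72, 91) if needs_justification(s)])  # [45, 91]
```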

5. Organisation Notification
Where possible, we notify the organisation 48 hours before publication. Organisations may submit factual corrections or additional evidence to hello@charityjournal.org. We do not share draft scores before publication.

6. Annual Review
All scores are reviewed annually as new annual reports and data become available. Scores may increase or decrease. Significant changes trigger a new article rather than a silent update.

What We Use to Score

All scoring data must be publicly verifiable. We do not rely on self-reported data submitted directly to Charity Journal unless it is corroborated by independent sources.

📄 Annual Reports
Published annual reports and impact reports from the organisation's own website, covering the most recent 1–3 years.
💰 Financial Filings
Regulatory filings with charity commissions, Companies House, or equivalent national bodies. For US organisations: Form 990.
🔍 Third-Party Audits
Independent auditor reports, Charity Navigator ratings, GuideStar profiles, and sector watchdog assessments.
🌐 Programme Data
Outcome reports, evaluation studies, and programme data published on organisational websites or in academic databases.
📰 Media Coverage
Credible media reporting on organisational performance, controversies, or notable achievements from the past 36 months.
🏛️ Regulatory Records
Any regulatory actions, investigations, or compliance notices from relevant national or international bodies.

Challenging a Score

Charity Journal welcomes factual corrections from any organisation featured in our sustainability reports. If you believe a score contains a factual error — for example, we cited an outdated financial filing or misread programme data — please contact us at hello@charityjournal.org with the subject line "Score Correction Request".

Please include the article URL, the specific subcategory in question, the data you believe is incorrect, and a link to the publicly available source that supports your correction. We aim to respond within 10 working days. Verified corrections result in a score update and an editorial note added to the published article.

We do not accept disputes based on disagreement with our editorial judgement, scoring rubric, or benchmark methodology — only factual errors in the underlying data.

Organisations: Submitting evidence that improves your score does not guarantee an update unless that evidence was publicly available at the time of original publication and was missed in our research. New data published after the article date will be considered at the next annual review.