2025-08-24
Image credit: Created for TheCIO.uk
Scottish pupils have already settled back into classrooms, while many English schools will open their doors in the first week of September. The return marks more than the end of summer; it is also a reminder of how dependent modern education has become on digital systems that need to be both available and secure. As teachers prepare lesson plans and pupils adjust to new routines, school leaders face growing pressure to ensure that the technology underpinning everyday learning is resilient, compliant and protected against increasingly sophisticated cyber threats.
Schools are carrying more digital risk than ever, often with fewer hands and older kit. Breaches in the private sector make the headlines, yet classrooms and trust offices are an attractive target for criminal groups that value the mix of sensitive information, operational pressure and limited capacity to respond.
Parents expect security to be a given. The sector is trying, and many teams do a solid job with what they have, but the gap between risk and readiness is getting wider. Standards and expectations are moving faster than budgets, skills and contract cycles. The Department for Education has set out a clearer floor for good practice that covers risk assessment, identity and access, multi factor authentication, patching, backups and incident planning, with roles and responsibilities sharpened in the 2024 and March 2025 updates. The wording that leaders will be held to is set out in the current DfE cyber security standards and the official updates log.
The scale of the problem is not in doubt. The official Cyber Security Breaches Survey 2024, education annex shows most secondary schools identified a breach or attack in the last year, with higher education and further education reporting even higher levels. Phishing remains the main way in across education settings. Primary schools are more likely than secondaries to outsource cyber security to a provider. Structured risk activity and testing are less common in schools than in colleges or universities, which hints at a familiarity gap as much as a resource gap.
Walk the estate and the pattern repeats. A cupboard server that should have retired two summers ago. Laptops that will not take the latest operating system. A wireless network that is fine until the first mock exams. A ticket queue that never quite reaches zero because the same flaky devices keep coming back. A trust office that relies on one person who knows every quirk in the setup. Contracts that read well until the first hour of an incident when nobody is quite sure who calls whom. None of this is unusual. It is the daily reality for many schools and trusts.
Keeping up asks for time, attention and a constant focus on the basics. Ageing infrastructure pushes costs into firefighting and out of planned improvement. Multi factor authentication is clearer in policy than it is on the ground. The standard is explicit that senior leaders and staff who handle confidential, financial or personal data must use multi factor authentication, and it encourages schools to extend that protection to all cloud services and to all staff where appropriate, as set out in the DfE standards. Training is too often a yearly tick in a learning portal rather than short, timely sessions that reflect how staff actually work. The same page points schools to free NCSC training for school staff and expects an annual cycle for users in scope. Backups exist in most places, but restore tests are less certain. The guidance calls for an approach that reflects the three two one principle, for termly tests, and for evidence that can be shown to insurers. Members of the Risk Protection Arrangement should note the cyber conditions in the RPA membership rules.
Roles and responsibilities with service providers are another weak seam. Many schools buy support that includes security but do not write down who owns the first hour of a crisis or how changes to identity, firewalls and backups are controlled and recorded. The DfE advises schools to ask for Cyber Essentials or Cyber Essentials Plus from suppliers and to map contracts to the controls the school must meet in the supplier expectations section.
Every request for a firewall refresh, a device replacement round or an identity project competes with classroom and welfare priorities. That is the context for most decisions. Even when small pots of money or frameworks exist, the bidding and compliance work is hard to absorb for small teams. The standards help. They say that a cyber risk assessment should be completed each year and reviewed every term, that data backup should be planned and tested, and that multi factor authentication should be used by senior leaders and by anyone handling sensitive or financial information. Anchoring spend to the DfE standards moves the conversation from optional to expected.
Large trusts can justify a chief information officer or a dedicated security lead. Many schools rely on a small internal team and an external provider to cover identity, devices, connectivity and day to day support. Recruiting and retaining people with current skills is difficult because public sector pay rarely keeps pace with private offers. In small schools, the lone technician can be isolated and short of time to learn. The standards acknowledge that reality by naming a senior leadership digital lead as accountable and by telling schools to seek outside help where skills are not available in house, set out under roles and accountability.
Security now touches a wider set of skills than a decade ago. It is not enough to keep antivirus up to date and patch servers. Schools need to understand cloud identity, conditional access, logging and alerting, incident response, supplier risk and insurance conditions. The NCSC 10 Steps is a simple lens for conversations with governors and senior leaders and lines up well with the DfE standards.
Schools work within UK GDPR and the Data Protection Act, and they should align with the DfE standards. Colleges are required to hold Cyber Essentials under their funding agreement. Schools are not required to certify, but the department encourages it and advises schools to ask suppliers for certification, as recorded in the standards and the DfE updates log. These points are worth writing into procurement, contract renewals and any review of a managed service.
Regulation sets the floor. Reputation sets the ceiling. Parents will assume the school uses modern, safe technology and sound practice. When a breach becomes public, the technical fix is only one part of the work. Community trust is harder to rebuild. The case for early investment is not only technical. It is also about confidence, transparency and the ability to show that the basics are in place and tested. For broader context, see the GOV.UK data protection guidance for schools and the ICO overview of children and UK GDPR.
Begin with a written cyber risk assessment and set a rhythm of review each term. Keep it short, name the owners and focus on what will change before the next holiday. Make sure a senior leadership digital lead is accountable and that governors see the risk register and the business continuity plan. Turn on multi factor authentication for senior leaders and for anyone who handles confidential, financial or personal data, as framed in the DfE standards. Extend coverage to administrator accounts and set out the path to bring all staff into scope where appropriate. Where a person needs accessibility adjustments, write them down and keep a record of the reasoning.
Tidy identity and access. Use unique credentials for all staff and pupils. Set sensible lockout rules. Follow NCSC guidance on passwords. Remove standing administrator rights wherever you can and add simple checks with HR for joiners, movers and leavers so that accounts follow the person and do not drift.
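The joiners, movers and leavers check described above can be as simple as comparing an HR roster export with a directory account export each week. The sketch below illustrates the idea; the field name `staff_id` and the data shapes are assumptions for the example, not a real HR or directory schema.

```python
# Sketch: flag account drift by comparing an HR roster export with a
# directory account export. The "staff_id" field is illustrative.

def account_drift(hr_roster, directory_accounts):
    """Return accounts with no matching HR record (leavers not removed)
    and HR records with no account (joiners not yet provisioned)."""
    hr_ids = {person["staff_id"] for person in hr_roster}
    dir_ids = {account["staff_id"] for account in directory_accounts}
    orphaned = sorted(dir_ids - hr_ids)   # accounts that should be disabled
    missing = sorted(hr_ids - dir_ids)    # people still awaiting an account
    return orphaned, missing

hr = [{"staff_id": "S001"}, {"staff_id": "S002"}]
accounts = [{"staff_id": "S001"}, {"staff_id": "S003"}]
orphaned, missing = account_drift(hr, accounts)
print(orphaned)  # ['S003'] — a leaver's account still active
print(missing)   # ['S002'] — a joiner not yet provisioned
```

Even a crude check like this, run on a schedule, stops accounts drifting away from the people they belong to.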
Fix backups and prove that they work. Keep protected copies that reflect the three two one principle. Test a restore each term, record the evidence and store the plan somewhere that does not rely on the system you are trying to recover. If you are in the Risk Protection Arrangement, note that cover depends on these practices and on annual training for users in scope, as set out in the RPA membership rules.
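A termly restore test produces better evidence when the restored files are verified against the originals rather than eyeballed. A minimal sketch, assuming you have restored a sample set to a scratch location; the directory layout is illustrative:

```python
# Sketch: evidence a restore test by hashing a sample of restored files
# against the originals. Paths and sample choice are illustrative.
import hashlib
from pathlib import Path

def sha256_of(path):
    """Hash a file in chunks so large files do not exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(original_dir, restored_dir):
    """Return (matched, mismatched) relative paths for the sample set."""
    matched, mismatched = [], []
    for src in Path(original_dir).rglob("*"):
        if src.is_file():
            rel = src.relative_to(original_dir)
            dst = Path(restored_dir) / rel
            if dst.is_file() and sha256_of(src) == sha256_of(dst):
                matched.append(str(rel))
            else:
                mismatched.append(str(rel))
    return matched, mismatched
```

Saving the output alongside the date and the person who ran the test gives you exactly the kind of record insurers and the RPA conditions expect.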
Secure the boundary you actually have. Check the firewall configuration. Protect available administrator interfaces with multi factor authentication. Make sure logs and alerts are enabled and someone will see them. If your broadband contract includes a managed firewall, sit down with the provider and map what they run to the wording in the DfE standard. Then write down who does what in an incident and share a one page flow that lists first actions, on call numbers and the information both sides will exchange in the first hour. Ask for proof of Cyber Essentials or Cyber Essentials Plus from your provider and keep it with the contract.
Move what you can to cloud services. The guidance is explicit that schools should use cloud solutions rather than local servers where possible, again set out in the DfE standards. If a system cannot move this year, record why and set a review date.
Finish the job on multi factor authentication. Bring all staff into scope. Choose methods that reduce the chance of tricking someone, especially for administrator accounts. Treat identity health as routine work.
Use the tools you already pay for. Many schools on Microsoft 365 or Google Workspace have baseline security features that are not yet switched on. Plan the rollout of endpoint protection, conditional access, email security, data loss prevention and identity risk signals. Tie every change back to the written risk assessment so the story is clear.
Improve monitoring and logging. Decide what you will collect, where you will keep it and who will look at it. Even simple steps such as forwarding audit logs for administrator actions and setting alerts for risky sign ins can cut the time it takes to see trouble. The DfE standard links to NCSC guidance on logging that can help define scope.
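Triage of exported audit logs does not need a SIEM to be useful. The sketch below shows the shape of a simple pass over exported events; the event names and fields are illustrative and will differ between Microsoft 365 and Google Workspace exports.

```python
# Sketch: surface audit-log events worth a human look. Action names and
# field names are illustrative, not a real Microsoft 365 or Workspace schema.

RISKY_EVENTS = {"admin_role_assigned", "mfa_method_changed", "mailbox_rule_created"}

def triage(events, failed_threshold=5):
    """Return review alerts for risky admin actions and repeated sign-in failures."""
    alerts = []
    failures = {}
    for event in events:
        if event["action"] in RISKY_EVENTS:
            alerts.append(f"review: {event['user']} performed {event['action']}")
        if event["action"] == "sign_in_failed":
            failures[event["user"]] = failures.get(event["user"], 0) + 1
    for user, count in failures.items():
        if count >= failed_threshold:
            alerts.append(f"review: {count} failed sign-ins for {user}")
    return alerts
```

The point is not the code but the habit: decide which events matter, collect them somewhere durable, and make sure a named person reads the output.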
Test the plan, not just the backups. Run two tabletop exercises a year. Choose one scenario where a staff account is taken over after a phishing lure and one where shared drives are encrypted. Time the first hour. Write down what slowed you down and fix it. The NCSC Exercise in a Box service offers a structured path if you need it.
Raise the floor on patching and device health. Shorten deadlines for critical updates. Automate operating system and browser updates wherever you can. Measure compliance every week and chase what falls behind. The education annex to the 2024 breaches survey shows that primaries in particular have room to improve on structured risk identification and testing.
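Measuring compliance weekly can start from any device inventory export that records the last successful update. A minimal sketch, assuming a 14-day deadline and illustrative field names:

```python
# Sketch: a weekly patch-compliance figure from a device inventory export.
# The 14-day deadline and the field names are illustrative assumptions.
from datetime import date

def compliance(devices, today, max_age_days=14):
    """Percentage of devices whose last successful update is recent enough."""
    if not devices:
        return 100.0
    ok = sum(1 for d in devices if (today - d["last_update"]).days <= max_age_days)
    return round(100 * ok / len(devices), 1)

fleet = [
    {"name": "LAP-001", "last_update": date(2025, 8, 20)},
    {"name": "LAP-002", "last_update": date(2025, 7, 1)},
]
print(compliance(fleet, date(2025, 8, 28)))  # 50.0
```

A single number tracked week on week makes "chase what falls behind" a routine conversation rather than a crisis one.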
Bake supplier checks into buying. Ask for Cyber Essentials or Cyber Essentials Plus during procurement. For higher risk systems, ask how the supplier will help you meet your duties under data protection law and under the DfE standards. Keep the evidence with the contract and review it at renewal, as advised in the DfE standards.
Join a peer community and share what works. If you are a single school IT lead, do not work alone. Use local networks and LGfL security resources to compare notes and borrow practical guidance.
Technology matters, but people and process keep a school resilient. Training works best when it is little and often rather than a single annual push. Use the NCSC modules, run short refreshers after real incidents and make time in briefings to swap lessons learned. Keep the incident plan to a few pages so it is usable when things are busy. Agree escalation paths with your provider and link the contract to the controls you are expected to meet. Pick a few trusted staff in different parts of the school and ask them to act as security champions. Give them a clear route to report concerns and share tips.
Governors, head teachers and business managers set the tone. The standards place accountability with a senior leadership digital lead and expect governors to ask questions, to include cyber in the risk register and to carry digital risks into the business continuity plan, as set out in the DfE standards. Colleges must hold Cyber Essentials. Schools should consider certification for themselves and ask for it from suppliers. Treat it as a milestone that forces attention on the basics rather than a badge for the website. The requirement for colleges is recorded in the DfE updates log.
The data shows a sector that sees frequent attacks and is still catching up on some fundamentals. The standards are clearer than before about what to do and who is responsible. Put the two together and the message is simple. Without sustained investment in technology, people and partnerships, schools will not keep pace with current threats. Digital resilience needs to move from an information technology task to a school wide priority.
What is your take? Where does your school or trust feel most exposed right now, and what would make the biggest difference this term?
Let us share the good, the bad and the messy middle. What has worked, what has not, and what you would change next time.
About the Author
Ben Meyer is a senior IT and cybersecurity leader with a decade of experience spanning scale‑ups, global enterprises and the public sector (contract). He specialises in modernising IT operations, implementing cloud platforms, and embedding security and compliance frameworks aligned with ISO 27001 and GDPR. Ben has partnered with startups, charities and high-growth organisations to deliver pragmatic, people‑centred technology strategies. He’s held leadership roles across IT support, infrastructure and cyber, and is known for building reliable systems with a clear focus on risk, documentation and operational maturity.
Disclaimer: This article is provided for general information only and does not constitute legal or technical advice. The author is not a legal professional. Organisations should seek independent legal counsel and technical advice before relying on, or acting upon, any of the points discussed.
2025-08-29
A newly revealed court filing shows the UK government sought sweeping access to Apple customer data, including non-UK users, through a Technical Capability Notice. The move raises serious privacy, security and accountability questions.
Image credit: Created for TheCIO.uk by ChatGPT
The UK government has quietly stepped into treacherous territory by seeking expansive access to Apple customer data, including users outside its jurisdiction. A newly surfaced court filing and expert commentary reveal a saga that has captured attention across the globe. This is more than a domestic push; it is a powerful test of the balance between national security and personal privacy, of legal secrecy and public accountability.
On 29 August 2025 the Financial Times disclosed that the UK Home Office issued a Technical Capability Notice under the Investigatory Powers Act that extended beyond UK borders. That revelation landed like a thunderclap. It confirmed that the notice demanded access to Apple’s standard iCloud service, not merely its optional Advanced Data Protection feature that offers end-to-end encryption. Moreover the order appeared to oblige Apple to provide data drawn from any user of iCloud globally including messaging content and saved passwords.
The Investigatory Powers Act, known colloquially as the “Snoopers’ Charter”, grants Britain sweeping surveillance powers. Section 253, invoked here, permits the issuance of Technical Capability Notices that compel companies to adjust their products or infrastructure to enable government access.
This is not Apple’s first run-in with the Home Office. Reports indicate that earlier in 2025 the Home Office moved to issue a TCN specifically targeting Apple’s Advanced Data Protection system. That demand prompted Apple to withdraw the option altogether for UK users in February.
Until the FT court filing, much about the precise scope of the TCN remained shrouded in secrecy. Apple cannot publicly discuss the notice under the secrecy provisions of the Act. The Investigatory Powers Tribunal accordingly treated key facts in the case as assumed for the purposes of hearing the challenge, allowing the case to proceed without confirming or denying sensitive details.
Privacy advocates did not wait for leaks to act. In March 2025, Liberty and Privacy International teamed up with two individuals to challenge the TCN itself and the closed nature of the legal hearing. They demanded that the hearing be opened to public scrutiny and that the tribunal refrain from operating under a cover of secrecy.
Their plea found traction. By 7 April 2025 the Home Office had lost its bid to suppress even the bare details of the Apple case from public view. Judges ruled that the identity of the parties and basic facts could be disclosed, rejecting the Home Office’s argument that such disclosure would harm national security.
Next, the tribunal set a case management order for a seven-day hearing in early 2026 to proceed largely in public under assumed facts. Other parties, including WhatsApp, moved to intervene.
The global dimensions of the notice sparked explosive reaction abroad. Last week, U.S. Director of National Intelligence Tulsi Gabbard confirmed that, following extensive consultations including with President Trump and Vice-President JD Vance, the UK had decided to withdraw its demand for an encryption back door into Apple systems.
The decision was reported widely in the U.S. press as a triumph for civil rights and transatlantic diplomacy. The Washington Post noted that the UK had pulled back in the face of criticism over civil liberties and concerns under the CLOUD Act. Privacy advocates, while welcoming the reversal, emphasised that the underlying legal authority to compel breaches of encryption remains intact.
Digital rights advocates are not celebrating just yet. The Investigatory Powers Act and associated regulations still allow for broad demands to be issued. Experts caution that without legislative reform the door remains ajar for government intrusion into encrypted systems.
Moreover the mechanics of the legal challenge rest on assumed facts rather than full disclosure. That raises concerns that even if Apple prevails the specific details may never fully emerge.
This episode resonates on multiple levels. It exposes a profound tension between government efforts to strengthen national security and the fundamental rights to privacy and encryption. Apple’s decision to cut ADP in the UK reinforced its commitment to security, but still left the possibility of compelled back doors looming for ordinary iCloud users globally.
It also underscores how secrecy provisions in law can be weaponised to shield state activity from democratic oversight. That shadowy axis of security legislation runs counter to the principles of open justice.
Internationally, the notice triggered a strong reaction. U.S. officials, civil society, and media framed this as a potential transgression on American citizens’ rights. The prospect of state-mandated vulnerabilities in encryption alarmed even moderate figures in Washington. Gabbard and others warned that compliance with such demands could violate US law and undermine both constitutional rights and trust in technology.
Apple’s case is set to be heard in early 2026. The tribunal will be working with “assumed facts” designed to protect official secrets while allowing public debate. Judges have set a timeline: Apple and the Home Office must agree the scope of those facts by 1 November 2025.
Civil society and industry observers will be paying close attention. If Apple succeeds, there may be precedent to limit future notices. If not, the legal threshold for government-mandated access to encrypted data might be lower than many think.
In parallel, experts are calling for legislative change. Without revision, the IPA continues to expose all users, inside the UK and overseas, to potential forced weakening of encryption.
This confrontation between Apple and the UK government may represent a turning point. It highlights three enduring truths.
First, in the face of official secrecy and sweeping laws, sunlight remains the best disinfectant. Transparency and open judicial scrutiny are essential to preserving essential liberties.
Second, encryption is not a glitch or luxury. It is a cornerstone of digital trust, privacy, and security. Undermining it undercuts not just individual safety, but the integrity of digital economies and democratic life.
Third, surveillance capabilities must always be balanced against civil liberties. Without firm guardrails and democratic visibility, the law becomes a lever for unchecked intrusion.
As Apple and human rights groups push back, they do more than defend a corporation; they defend the principle that some doors must remain locked, from governments as well as criminals.
The hearing in 2026 may deliver clarity. Until then the world watches as the legal frameworks and fundamental values of privacy, security, and surveillance collide in open court.
What’s your take? Should governments ever compel back-door access to encrypted data, or is this a line that must never be crossed?
2025-08-28
A four-year cybercrime campaign targeting Mexican banks reveals just how resilient, regional and relevant financially motivated threat actors remain – and why the UK financial sector cannot treat it as someone else’s problem.
Image credit: Created for TheCIO.uk
For almost four years, a small, disciplined group of criminals has taken aim at Mexican banks, retailers and public bodies, exfiltrating credentials and emptying accounts. Researchers who finally stitched the evidence together call the gang Greedy Sponge, a name borrowed from a SpongeBob meme once spotted on their command-and-control server.
The criminals’ latest campaign, revealed this week, shows a sharp uptick in capability. Instead of the vanilla remote-access tools that first drew attention in 2021, Greedy Sponge now delivers a heavily customised variant of AllaKore RAT alongside the multi-platform proxy malware SystemBC. Together, the pair gives attackers persistent footholds, covert tunnels and a menu of plug-ins to siphon money at will.
Greedy Sponge may feel distant, confined to Mexican institutions. Yet its tools, patience and operational discipline send a warning that extends far beyond Latin America. For British financial leaders, the lesson is blunt: geography is no longer a firewall.
Initial access still begins with people. Victims receive zipped installers purporting to be routine software updates. Inside sits a legitimate Chrome proxy executable and a trojanised Microsoft Installer.
Run the file and a .NET downloader named Gadget.exe quietly reaches out to Hostwinds infrastructure in Dallas, pulls down the modified AllaKore payload and moves it into place. The loader even cleans its own tracks with a PowerShell script so nothing obvious remains in %APPDATA%. It is careful, boring, and effective — the kind of intrusion that does not light up a SIEM dashboard until money is already moving.
Greedy Sponge was once content to geofence victims client-side, checking IP addresses before releasing the final stage. The group has now shifted that logic server-side, a subtle change that blinds many sandboxes and threat hunters.
By handing the decision to the server, the criminals limit forensic artefacts and make it harder for defenders outside Mexico to replicate the kill chain. The network map is small but resilient: phishing domains, RAT control servers and SystemBC proxies all sit in neat clusters, registered through offshore companies and hosted in the same American data centre.
It is a reminder that scale is not always the objective. A tight, disciplined infrastructure can evade takedowns and stay online far longer than sprawling botnets.
AllaKore is open-source, written in Delphi and first surfaced back in 2015. Open-source malware often ends up discarded or replaced, yet Greedy Sponge has treated it as a living project.
Their fork now grabs browser tokens, one-time passwords and banking session cookies, wrapping the loot in structured strings for easy ingestion at the back end. Once entrenched, the RAT fetches fresh copies from z1.txt and drops secondary payloads via SystemBC proxies. The operation looks methodical, suggesting a tiered workforce: entry-level operators handle phishing while more skilled colleagues sift stolen data and run fraud at scale.
In cyber crime, longevity is often underestimated. What defenders dismiss as “old” can still bleed institutions dry when packaged with new tricks.
Three traits stand out:
Operational patience. Four years is an eternity in cyber crime circles. This crew has not chased quick ransomware payouts; it has refined tooling until the infection chain is almost mundane.
Regional intimacy. Spanish strings inside binaries, lures themed on the Mexican Social Security Institute and netflow showing remote desktop traffic from Mexican IPs point to local knowledge and comfort operating near home turf.
Incremental upgrades. Moving geofencing server-side, bolting in SystemBC, adding UAC bypasses via CMSTP — each tweak raises the bar without triggering a brand-new hunting signature.
This is not smash-and-grab. It is slow cooking, with every change carefully tasted before it is served to victims.
Greedy Sponge is not the first financially motivated crew to grow from local to global impact. Carbanak began with targeted intrusions against Eastern European banks in 2013 before spilling into Western institutions, with estimated thefts exceeding one billion US dollars. TrickBot evolved from a small banking trojan into a modular platform rented out to ransomware gangs worldwide.
Even Lazarus, the North Korean-linked group behind the Bangladesh Bank heist in 2016, showed how a crime born of local compromise could ripple across the global financial system.
These precedents underline the risk: tools refined in Mexico today can be franchised or sold into Europe tomorrow.
The International Monetary Fund has linked nearly one-fifth of total financial losses worldwide to cyber incidents. In 2024 alone, destructive attacks against banks rose thirteen per cent, according to multiple threat intelligence reports.
Financial crime is a marketplace. Malware, access and stolen credentials circulate like commodities. Greedy Sponge may have begun in Mexico, but its harvest can feed fraud operations anywhere.
The geography of compromise no longer dictates the geography of loss.
British lenders have weathered recent storms better than many peers. Freedom of Information data shows the FCA logged 53 per cent fewer cyber notifications from regulated firms in 2024 than the year before, crediting tighter operational resilience rules for the fall.
Yet the same dataset confirms that vendor incidents and data-exfiltration events remain stubborn risks. Greedy Sponge’s knack for secondary infections and geofenced payloads speaks directly to that threat: if a UK supplier with operations in Latin America is compromised, credentials harvested abroad can still unlock systems in London.
A call centre in Monterrey, a development team in Guadalajara or a shared service hub in Mexico City can all act as stepping stones into the UK core banking estate.
Chaucer Group’s analysis of 2023 breaches put the number of UK citizens affected by attacks on financial services at more than twenty million, a rise of 143 per cent year-on-year. Those figures reflect an ecosystem in which stolen data moves fast.
A credential skimmed from a Mexican multinational with a London subsidiary is just as valid on a British banking portal. A cookie stolen from a contractor’s remote session can be replayed against an FCA-regulated payment switch.
The sponge analogy is apt. Quiet absorption in one region eventually drains customers half a world away.
Greedy Sponge reinforces a simple mantra: controls must travel with data, not with office locations.
If your firm operates call centres, development shops or outsourced back-office teams in Latin America, credential harvesting there becomes a direct threat to UK core banking. Zero-trust principles, privileged access management and mandatory hardware tokens are the modern seat belts.
They are the difference between a phish leading to an isolated workstation rebuild and an attacker replaying session cookies against the production payment switch.
Indicators tied to this campaign include the PowerShell filename file_deleter.ps1, the .NET user-agent string mimicking Internet Explorer 6 and the Hostwinds IP range 142.11.199.*.
Blocking those artefacts buys time, but reliance on static indicators of compromise is a losing race. The smarter route is behavioural: alert on unsigned MSI executions that spawn PowerShell, on any network request with the vintage MSIE 6 user-agent and on outbound connections to port 4404.
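Those behavioural checks can be expressed directly against whatever log records your proxy or EDR exports. The sketch below is illustrative only: the field names are assumptions about a generic log schema, while port 4404, the MSIE 6 user-agent and the unsigned-MSI-spawns-PowerShell pattern come from the campaign described above.

```python
# Sketch: behavioural checks matching the indicators discussed in the text.
# Field names ("dest_port", "user_agent", "process", "parent_process",
# "parent_signed") are illustrative, not a real EDR schema.

def suspicious(record):
    """Return the list of reasons a log record deserves investigation."""
    reasons = []
    if record.get("dest_port") == 4404:
        reasons.append("outbound connection to port 4404")
    if "MSIE 6" in record.get("user_agent", ""):
        reasons.append("vintage MSIE 6 user-agent")
    if (record.get("process") == "powershell.exe"
            and record.get("parent_process") == "msiexec.exe"
            and not record.get("parent_signed", True)):
        reasons.append("unsigned MSI execution spawned PowerShell")
    return reasons
```

Rules like these outlive any single IP range or filename, which is exactly the inertia defenders should exploit.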
Criminals evolve fast. Behavioural signals evolve slower, and defenders can use that inertia to their advantage.
Every UK lender now embeds suppliers deep inside payments, analytics and customer service flows. A pre-production environment in Monterrey running on a contractor’s laptop can bridge, via VPN, into a London data centre.
Greedy Sponge already exploited that scenario domestically by moving laterally from retail to banking networks. The same tactic, exported, would let criminals bypass hardened internet perimeters and walk in through trusted third-party tunnels.
Controlling and segmenting supplier access is no longer a compliance hygiene task. It is a front-line defence.
The Bank of England and the FCA are finalising rules that label certain cloud and IT suppliers “critical”. Under the proposals, outages or compromises at those providers could trigger direct intervention by supervisors.
Boards tempted to treat geofenced Latin-American malware as someone else’s problem will find less room to hide. Regulators increasingly expect firms to model and test cross-border attack paths, just as they rehearse liquidity stress scenarios.
Ignoring regional campaigns is no longer an option when supervisors demand proof that attack paths have been mapped, tested and mitigated.
It is tempting to dismiss AllaKore and SystemBC as yesterday’s malware. Yet the persistence of such tools reveals uncomfortable truths. Old codebases offer reliability. Open-source means multiple groups can fork and improve them. And familiarity makes detection harder, as defenders may downgrade alerts on “known” malware families.
Greedy Sponge’s success with AllaKore is proof that novelty is overrated. Steady refinement often beats innovation in the criminal toolkit.
Defenders rarely need silver bullets. They need consistency. Small, boring controls applied daily matter more than headline-grabbing solutions.
Teach staff to doubt unexpected installers. Instrument networks to recognise odd user-agents. Enforce multi-factor authentication even on staging environments.
These steps are not glamorous, but neither is Greedy Sponge. Both attacker and defender win through relentless repetition.
Greedy Sponge did not invent zero-day exploits or novel encryption. They packaged known tools, tuned them carefully and taught staff to follow a script.
Defenders can mirror that discipline. Cyber resilience is rarely heroic; it is the accumulation of small steps taken every single day.
The sponge analogy holds. Slow, quiet absorption eventually drains the victim. The antidote is equally unglamorous: keep wringing out the risk before it saturates your estate.
2025-08-28
A cyber attack on APCS and its software supplier has left thousands of people vulnerable to identity theft. With sensitive data exposed across sectors, the breach highlights the fragility of supply chains, fragmented accountability, and the collapse of trust in systems designed to safeguard.
Image credit: Created for TheCIO.uk by ChatGPT
A cyber attack on the software system used by Access Personal Checking Services (APCS) has placed thousands at risk of identity theft. The gravity lies not only in the type of data exposed, but in the purpose of the service itself. Background checks through the Disclosure and Barring Service (DBS) exist to protect children and vulnerable adults. To find that the systems designed to safeguard instead became a liability raises profound questions about governance, resilience and trust.
APCS is the UK’s self-described fastest DBS checking service (APCS official site), working with more than nineteen thousand organisations across healthcare, education, charities, finance and religious institutions. While much of the early reporting focused on dioceses, the exposure stretches far wider. This was not a niche church systems failure. It was a supply chain breach affecting an umbrella body relied upon across multiple regulated industries.
The breach originated with APCS’s external software developer, Intradev, based in Hull. Certified under the UK National Cyber Security Centre’s Cyber Essentials programme, Intradev detected unauthorised activity in its systems on 4 August 2025. Managing director Steve Cheetham described it as a “significant IT incident”, without confirming whether ransomware was involved.
Containment measures were put in place and the incident was reported to the Information Commissioner’s Office (ICO) and Action Fraud. Crucially, APCS’s own production systems were not directly compromised, but the developer’s environment appears to have contained sensitive records. This raises questions about segmentation between development, test and live systems — and whether principles such as least privilege and encryption were adequately enforced.
APCS has stated that it does not hold card details or criminal conviction data. But the personal identifiers at risk are still highly sensitive. Records include names, dates and places of birth, addresses, gender, National Insurance numbers, passport details and driving licence numbers.
Winchester Diocese clarified that compromised data consisted of text-based fields rather than scanned images (Winchester update), a detail that may reduce the risk of document forgery but does nothing to mitigate the fraud potential of raw identifiers.
The confirmed breach window stretches from December 2024 to May 2025, though Worcester indicated exposure may have started as early as November 2024 (Worcester statement). That represents months of DBS applications potentially exposed, and even if only a fraction of records were taken, the scale is significant.
Further reporting has shown that the breach extends beyond church use into education. Schools Week highlighted that school staff records stored in single central record systems were potentially exposed (Schools Week), broadening the scope of risk into the education sector. Legal guidance for schools and data controllers quickly followed, including recommendations from Browne Jacobson on regulatory reporting and safeguarding obligations (Browne Jacobson).
Once notified, APCS alerted its client organisations — who are themselves the data controllers under UK GDPR. Here the fragmentation became visible. Some institutions urged affected individuals to sign up for identity monitoring services. Others paused all DBS checks through APCS. A few insisted parishes or branches handle communication independently.
For volunteers and employees, the result was confusion. Should they expect direct contact from APCS, their employer, or a third-party service? For IT leaders, the lesson is stark: inconsistent messaging compounds harm. Crisis communication must be centralised, clear and coordinated.
From a compliance perspective, reports were filed to the ICO, Action Fraud and in some cases the Charity Commission. That demonstrates baseline regulatory diligence, but the divergence in organisational responses may invite further scrutiny. The ICO has repeatedly signalled that accountability cannot be outsourced — even if the immediate failure is a supplier’s.
The breach illustrates a familiar pattern of technical fragility across software supply chains. Developers sometimes use live production data in test environments without anonymisation, creating unnecessary exposure if those environments are compromised. Segmentation between development and production can also be weak, allowing intruders to pivot across systems. References to “text-based” data point to storage choices that may not have included encryption at rest. And where vendors retain broad access privileges without granular controls, a compromise of one environment can cascade into multiple clients.
These are not unique failings of APCS or Intradev. They are endemic across supplier ecosystems where speed and cost efficiency are prioritised over resilience.
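One of the mitigations the pattern points to, anonymised test data, can be sketched as a simple masking pass run before any production extract reaches a development environment. The record shape and field names below are hypothetical, not APCS’s actual schema:

```python
import hashlib

def anonymise(record, salt="test-env-2025"):
    """Return a copy of a record that is safe for a test environment:
    direct identifiers become salted one-way hashes (still join-able
    across tables), and free-text fields are blanked entirely."""
    masked = dict(record)
    for field in ("name", "ni_number", "passport_no", "driving_licence"):
        if field in masked:
            digest = hashlib.sha256((salt + str(masked[field])).encode()).hexdigest()
            masked[field] = digest[:12]  # short pseudonym in place of the real value
    masked["address"] = ""  # free text carries re-identification risk; drop it
    return masked

# Hypothetical record standing in for a DBS application row.
record = {"name": "Jane Doe", "ni_number": "QQ123456C", "address": "1 High St"}
safe = anonymise(record)
print(safe["ni_number"] != record["ni_number"])  # True: the real NI number never reaches test
```

Had the compromised developer environment held only pseudonymised rows like these, the identifiers now circulating would have been worthless to fraudsters.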
For individuals, the risks are direct. With National Insurance numbers, passport and driving licence details, criminals can attempt impersonation, credit fraud or targeted phishing. Services such as Experian’s Identity Plus, offered to affected individuals by some dioceses (Southwark statement), provide a layer of protection, but only for a limited period. The shelf life of stolen data is long, and fraud attempts can surface years after the monitoring stops.
For organisations, the reputational damage can be severe. APCS marketed speed as its differentiator. Yet when “fastest” becomes synonymous with weakest, the long-term cost to trust can outweigh any operational benefit. For clients in healthcare, finance or education, continuing to rely on a provider now publicly associated with a breach carries its own risks.
The APCS breach underscores why supplier oversight cannot be reduced to certification logos. For IT leaders and boards, resilience depends on more than internal controls. It requires interrogation of suppliers’ data handling practices, segregation of environments, use of anonymised test data, encryption at rest and in transit, and clear contractual obligations around incident response. Leaders should insist on verifiable evidence, not marketing claims, and demand assurance through regular independent testing and reporting.
“If your vendors fail, you fail — in the eyes of regulators, the public and those you serve. Supply chain resilience is not optional. It is the frontline of trust.”
Certification such as Cyber Essentials signals a baseline commitment, but it is not a guarantee of resilience. A logo on a tender document is no substitute for visibility into how a vendor actually manages and protects sensitive data.
This breach sits within a wider pattern of institutional exposure. The British Library ransomware attack in 2023 saw 600GB of data leaked online. The Legal Aid Agency incident in early 2025 exposed millions of records. Each case involved trusted institutions where sensitive information is central to public service. The APCS breach adds a further dimension by showing how attackers can target the supply chain to reach data indirectly.
This was not just a data breach. It was a breach of confidence in the very systems intended to protect. When background checks become an attack surface, safeguarding collapses into liability.
For IT leaders, the lesson is clear. Resilience depends on the strength of every link in the supply chain. If your vendors fail, you fail — in the eyes of regulators, the public and those you serve. Operational efficiency must never come at the cost of resilience.
The APCS breach is a frontline reminder that data protection is not an IT back-office issue. It is a leadership responsibility, tied to safeguarding, trust and legitimacy. Unless supplier resilience is treated with the same seriousness as in-house controls, incidents like this will continue to erode confidence in the institutions people rely on most.
In the end, the question every IT leader must ask is simple: if your supplier was breached tomorrow, would you still be trusted the day after?
What’s your take? Do you believe organisations are taking third-party risk seriously enough, or will incidents like this keep repeating?
Let’s share the good, the bad and the messy middle of managing trust in our supply chains.
2025-08-27
The discovery of PromptLock – the first AI-powered ransomware – signals a new era in cyber threats. By leveraging local large language models, this proof of concept marks a turning point in how ransomware can adapt, evade, and scale beyond traditional defences.
Image credit: Created for TheCIO.uk
In a development that reads like a page from tomorrow’s tech thriller yet remains very much rooted in today’s threat landscape, cybersecurity researchers have uncovered what appears to be the first instance of ransomware built with genuine AI capability. Dubbed PromptLock, this malware represents a new frontier in how attackers might weaponise artificial intelligence. Far from theoretical musings, PromptLock signals a tangible shift, with criminals crafting malware that not only encrypts and steals data but does so by leveraging local large language models to generate malicious code dynamically.
This breakthrough was reported by ESET researchers, who analysed malware samples uploaded to VirusTotal and determined that PromptLock uses a local AI model to drive its operations. Its discovery raises profound concerns about how quickly threat actors could employ AI to scale threat sophistication and evade detection.
PromptLock is written in Go and targets Windows, Linux, and macOS environments. What sets it apart is the integration of AI directly into its attack chain rather than relying on static payloads or precomposed scripts. The malware makes use of gpt-oss-20b, an open-weight large language model developed by OpenAI. By running the model locally via the Ollama API, ransomware architects avoid making outbound requests to commercial AI providers, effectively evading scrutiny and attribution.
The sequence of operations unfolds like this: inside the compromised system the malware triggers a local instance of gpt-oss-20b, supplying it with hard-coded prompts to produce Lua scripts. Those scripts perform a range of malicious activities: enumerating the file system, inspecting and exfiltrating files, and applying encryption using the NSA-developed SPECK 128-bit algorithm. In essence, the AI model composes payloads on the fly, swapping static code for responsive, bespoke instructions based on the environment it inhabits.
Strikingly, ESET also found that whilst PromptLock does contain code suggesting destructive capabilities, such as file deletion, those routines appear to be unfinished or inactive at this stage. That, combined with other contextual evidence, suggests that what we are seeing is likely a proof of concept, still under development rather than an actively deployed malicious tool.
Traditional ransomware relies on predefined code and behaviour. Analysts can trace signatures, predict threat patterns, or contain outbreaks using known indicators of compromise. PromptLock disrupts that model in two critical ways.
Firstly, it introduces non-determinism. Since AI models generate outputs that vary, even when given the same prompt, each execution of the malware could look different. This variability hampers signature-based detection. As one researcher explained, "indicators of compromise may vary from one execution to another," making defences far more complex.
Secondly, by processing AI locally, the malware obviates the need for external communication with AI service providers. That shields attackers from potential exposure and intrusion detection that might occur when connecting to cloud services.
Beyond its novelty, the very concept of malware adapting in real time to its environment, composing tailored commands based on local data, marks a new class of threat: one that combines adaptability with anonymity, speed and technical sophistication.
PromptLock arrives at a time when AI is already disrupting cyber offence and defence dynamics. Organisations, particularly in the UK, must anticipate the arrival of smarter, more flexible malware.
Endpoint defences need to monitor for anomalies such as unexpected executions of Lua, Go-based binaries and local AI processes. Behavioural analysis must evolve to detect unexpected contexts.
Network monitoring should flag suspicious tunnelling to local AI APIs, especially Ollama-like infrastructure or traffic patterns moving data from endpoints to internal AI servers.
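As a concrete starting point, Ollama’s HTTP API listens on TCP 11434 by default, so flows to that port from hosts with no sanctioned AI workload are a reasonable first alert. A minimal sketch, assuming flow records are already available as (source, destination, port) tuples and with an illustrative sanctioned-host list:

```python
OLLAMA_PORT = 11434  # Ollama's default API port
SANCTIONED_AI_HOSTS = {"10.0.5.20"}  # illustrative: the one approved inference server

def suspicious_ai_flows(flows):
    """Return flows heading to an Ollama-like endpoint that is not sanctioned,
    including endpoints talking to a model running on themselves."""
    return [
        (src, dst, port)
        for src, dst, port in flows
        if port == OLLAMA_PORT and dst not in SANCTIONED_AI_HOSTS
    ]

flows = [
    ("10.0.1.4", "10.0.5.20", 11434),  # approved inference server: fine
    ("10.0.1.9", "10.0.1.9", 11434),   # laptop talking to a local model: flag
    ("10.0.1.9", "151.101.1.1", 443),  # ordinary web traffic: ignore
]
print(suspicious_ai_flows(flows))
```

A port check alone is easily evaded by an attacker who rebinds the listener, so in practice this would be paired with process monitoring for unexpected local inference runtimes.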
Threat intelligence frameworks must shift from relying solely on static signatures to context and behaviour. PromptLock variants may evade detection unless defences adapt to recognise AI-generated sequence patterns.
Policy enforcement needs updating. If organisations adopt AI agents for automation or analysis, they must ensure those agents operate in secure, compartmentalised environments. Without proper safeguards, such systems can be hijacked or turned inward.
In short, PromptLock is not just another malware; it is a harbinger. Security teams need to prepare for active AI agents as adversaries, not merely static code.
While PromptLock appears to be the first AI-powered ransomware detected in the wild or near the wild, it is not the only project in the space. Researchers had previously explored AI-guided ransomware in academic contexts.
For instance, RansomAI, a reinforcement-learning framework developed in mid-2023 and covered by itnews.com.au and arxiv.org, shows how ransomware could adapt its encryption behaviour to evade detection while maximising damage, though it was experimental and targeted hardware such as the Raspberry Pi.
Similarly, EGAN, a generative adversarial setup from May 2024, focused on producing ransomware mutations that evade modern antivirus solutions using AI-enhanced mutation strategies.
Though both are theoretical exercises, they underscore that the concept of “intelligent malware” is not science fiction: it is a subject of active research. PromptLock brings us closer to that unsettling reality.
Leading cybersecurity voices warn that PromptLock’s emergence is the tip of the iceberg. As one expert put it on X:
“We are in the earliest days of regular threat actors leveraging local / private AI. And we are unprepared”.
ESET themselves emphasised the significance of the discovery on their official research channel:
“ESET Research discovered PromptLock, the first known AI-powered ransomware. Written in Go and using gpt-oss-20b through Ollama, it demonstrates how threat actors could use local LLMs to generate malicious payloads and evade traditional detection”.
These warnings reinforce the gravity of the moment. While PromptLock may still be embryonic, the blueprint is out in the open.
What does PromptLock’s discovery mean for the near future of cyber threats and defences?
Rapid Evolution of Malware
If attackers can deploy AI models, whether open-weight or proprietary, within their malicious infrastructure, malware becomes not only more flexible but easier to adapt and harder to predict.
Proliferation of AI Toolkits
As models like gpt-oss-20b and frameworks like Ollama gain popularity, attackers lose barriers to entry. Open-source AI reduces costs and raises the threat ceiling quickly.
Arms Race in Detection Tools
Defenders must invest in AI-powered detection themselves. These systems must be capable of recognising dynamic, generative attacks that adapt in real time. New defences may include AI-based anomaly detection, deep behavioural monitoring, and AI sandboxing.
Policy and Regulation Challenges
How do regulators respond when AI becomes a weapon in criminal toolkits? Discussions over AI usage, access control, logging, and traceability gain urgency.
Rethinking Incident Response
Traditional IR approaches assume consistent behaviour and predictable traces. Now responders must be prepared for unpredictable, randomised attack logic that defies conventional pattern matching.
PromptLock does not yet appear to have infected targets in the wild. It remains, for now, a proof of concept. But that does not lessen its significance. Instead it amplifies the warning: the mechanisms and techniques exist. All that is needed is for threat actors to deploy them at scale.
In the UK and beyond, organisations must treat this moment as a turning point. The revolution of cyber threats is not merely AI-augmented… it is AI-powered.
CISOs and security teams must embrace smarter defences, update detection regimes, constrain internal AI agents, and stress test infrastructure against generative threat logic.
The future of ransomware may no longer carry the fingerprints of its creator. Instead, it may arrive as the output of an AI, tailored precisely to its environment and destined to remain one step ahead.
Corroborating the details of PromptLock across several trusted outlets reinforces its significance: together they paint a consistent picture of a novel, embryonic threat and a notable departure from the static ransomware of the past.
2025-08-26
UK banks are balancing legacy technology, an evolving threat landscape and growing regulatory demands. The sector’s ability to modernise at pace will define not just its resilience but its credibility in the eyes of customers and regulators alike.
Image credit: Created for TheCIO.uk
The UK banking sector is under renewed pressure to modernise its cyber security. For years, banks have been seen as some of the most mature organisations in the way they handle cyber risk. Yet the reality is more complex. Legacy systems, fragmented digital estates, and an expanding attack surface have left cracks in the armour. Attackers have noticed.
This summer has seen an uptick in incidents and warnings directed at UK financial institutions. Ransomware groups are testing their luck with extortion campaigns. State-backed actors are probing critical systems, while fraudsters exploit the gaps between customer expectations and the ability of banks to keep their channels secure.
The core issue is that cyber security is no longer about perimeter defence or compliance checklists. It is about resilience. And that requires modernisation at scale.
Banks are uniquely exposed to legacy technology. Decades of mergers, acquisitions and rapid digital expansion have left many institutions with a patchwork of systems. Some of these platforms are still running on out-of-support operating systems or applications that were never designed to interact with modern architectures.
For IT leaders inside banks, this creates a paradox. These systems are too critical to simply replace, yet too outdated to properly secure. Modernisation programmes are underway in most institutions, but they take time, money and political capital. In the meantime, adversaries exploit known vulnerabilities in older systems, often finding the weakest link in a supply chain rather than breaching a fortified core.
The longer legacy systems remain operational, the greater the burden on cyber security teams to defend the indefensible.
Banking is one of the few sectors where customers still expect absolute reliability. A retail customer may tolerate glitches from a streaming service or an e-commerce platform, but if their bank suffers an outage or a breach, trust is shattered immediately.
This trust deficit makes banks prime targets. Attackers know that even minor service disruptions can generate panic, headlines and regulatory scrutiny. A phishing campaign against customers, a credential stuffing attack on mobile apps, or a ransomware hit on a payments processor all carry reputational risk far beyond the initial compromise.
As customers increasingly engage with banks through digital channels, the attack surface widens. Mobile apps, open banking APIs, cloud-based services and instant payments all bring innovation and convenience. They also bring complexity, dependencies and fresh vectors for exploitation.
The race to modernise is therefore not only about operational resilience, but about preserving customer confidence.
The Prudential Regulation Authority (PRA), the Financial Conduct Authority (FCA) and the Bank of England have all stepped up their expectations around operational resilience. UK regulators are clear: banks must be able to withstand and recover from disruptive cyber events.
The new rules on important business services and impact tolerances are shifting boardroom conversations. It is no longer enough to focus on recovery times. Institutions must map dependencies, test their assumptions and prove that critical services can continue even under sustained attack.
Meanwhile, the Digital Operational Resilience Act (DORA) in the European Union is raising the bar for international banks with cross-border operations. Even though DORA is EU legislation, its ripple effects are felt in London. Global institutions cannot afford to run resilience to different standards in different markets.
The regulatory message is consistent: cyber resilience is now a core component of financial stability. Boards are accountable, and excuses are no longer tolerated.
For banks, the financial impact of cyber incidents goes far beyond fines. The direct costs of responding to a breach include investigation, recovery, customer compensation and system rebuilds. Indirect costs include lost business, higher insurance premiums, increased borrowing costs and reputational harm.
History provides clear lessons. The 2018 TSB IT migration failure left millions of customers locked out of accounts, costing the bank hundreds of millions of pounds and damaging its reputation for years. While that incident was more about IT failure than a direct cyber-attack, it shows how technology weaknesses can quickly spiral into systemic issues.
Ransomware groups are also evolving. Rather than encrypting systems and hoping for a payout, many now focus on double or triple extortion, stealing sensitive data and threatening to release it unless payment is made. For a bank, the release of customer information is not just a data protection issue. It is a trust crisis that regulators, politicians and the public will not forgive easily.
While legacy systems are a major weakness, innovation brings its own risks. The rapid adoption of artificial intelligence, machine learning and automation within banking is reshaping operations. Fraud detection is faster, customer service is more efficient, and risk models are more dynamic. Yet AI also introduces opaque decision-making processes, data governance concerns and new avenues for adversarial manipulation.
Similarly, the push to cloud brings agility but also dependence on third-party providers. Banks are increasingly reliant on hyperscale cloud vendors to host critical services. While these providers invest heavily in security, the concentration risk is real. A disruption at a single provider could cascade through the sector. Regulators are acutely aware of this, which is why operational resilience is not just about the bank itself but its entire ecosystem.
Technology is only part of the equation. Human behaviour remains one of the most significant risks in banking cyber security. Phishing, business email compromise and social engineering are still responsible for a disproportionate number of breaches.
Banks have invested heavily in awareness campaigns and simulated phishing exercises, but fatigue is setting in. Employees are overwhelmed by security training, alerts and procedures. At the same time, the pressure to deliver digital transformation at speed can lead to shortcuts that weaken security.
CISOs and IT leaders in banking are therefore under pressure to balance strict security controls with business agility. Achieving this balance requires cultural change, not just technical fixes. Security must be embedded into decision-making at every level, from product design to customer service.
In UK banks, cyber security is now firmly a board-level issue. The days when it could be delegated to the IT department are over. Directors are personally accountable under regulatory frameworks, and they face questions from investors, customers and Parliament when things go wrong.
Board engagement is improving, but challenges remain. Many directors lack deep technical expertise, and translating cyber risk into financial and operational terms is still a work in progress. CISOs must become storytellers, articulating not just threats but the business case for investment.
This shift in governance is positive, but it adds pressure. Boards are less tolerant of uncertainty, and they expect clear answers. The problem is that cyber risk is inherently uncertain. The question is not whether banks will be attacked, but when and how effectively they can respond.
No bank can defend itself in isolation. The sector has long recognised the value of intelligence sharing, and initiatives such as the Financial Sector Cyber Collaboration Centre (FSCCC) and the Bank of England’s CBEST framework are now well established.
These initiatives are critical, but they require active participation. Smaller institutions sometimes lack the resources to fully engage, leaving them more exposed. At the same time, adversaries are increasingly collaborating across borders, trading tools and techniques on underground forums.
To keep pace, UK banks must deepen their collaboration not only with each other but also with telecoms providers, cloud vendors, government agencies and even competitors. Cyber defence is becoming an ecosystem challenge, not a solitary one.
Like every sector, banking faces a cyber skills shortage. Experienced security professionals are in high demand, and banks must compete with technology firms, consultancies and government agencies to attract talent.
The stakes are higher in financial services. The skills shortage cannot be solved with recruitment alone. Upskilling existing staff, automating routine tasks, and investing in security orchestration and AI-driven threat detection will all be essential.
If banks cannot close the skills gap, they risk overburdening their teams and missing emerging threats. The pressure to modernise is therefore also about modernising how the workforce is supported, trained and augmented.
The next decade will determine whether UK banks can stay ahead of their adversaries. Cyber threats are not static, and neither can defences be. Quantum computing, deepfake-enabled fraud, AI-driven malware and state-backed campaigns will all redefine the risk landscape.
For banks, the imperative is clear: modernise now or be left exposed. That means accelerating legacy replacement programmes, embedding security into digital transformation, strengthening governance and deepening collaboration across the sector.
The UK banking sector has long been a global leader. But leadership is not a static position. It must be earned repeatedly, especially in cyber security. The pressure to modernise is not just about compliance or resilience. It is about safeguarding the trust that underpins the entire financial system.
Cyber security in UK banks is no longer just a technical issue. It is a strategic priority that cuts across leadership, regulation, customer trust and operational resilience. The sector has some of the brightest minds, deepest pockets and strongest incentives to get it right. But that does not make it immune to failure.
The window for incremental change is closing. Attackers are innovating, regulators are tightening their grip, and customers are watching closely. The challenge for banks is to modernise before events force their hand. The cost of delay is measured not just in fines and losses, but in trust, reputation and the stability of the financial system itself.
2025-08-24
Schools are juggling ageing technology, squeezed budgets and thin teams while cyber threats rise. The standards are clearer, the stakes are higher, and the window for incremental change is closing.
Image credit: Created for TheCIO.uk
Scottish pupils have already settled back into classrooms, while many English schools will open their doors in the first week of September. The return marks more than the end of summer; it is also a reminder of how dependent modern education has become on digital systems that need to be both available and secure. As teachers prepare lesson plans and pupils adjust to new routines, school leaders face a growing pressure to ensure that the technology underpinning everyday learning is resilient, compliant and protected against increasingly sophisticated cyber threats.
Schools are carrying more digital risk than ever, often with fewer hands and older kit. Breaches in the private sector make the headlines, yet classrooms and trust offices are an attractive target for criminal groups that value the mix of sensitive information, operational pressure and limited capacity to respond.
Parents expect security to be a given. The sector is trying, and many teams do a solid job with what they have, but the gap between risk and readiness is getting wider. Standards and expectations are moving faster than budgets, skills and contract cycles. The Department for Education has set out a clearer floor for good practice that covers risk assessment, identity and access, multi factor authentication, patching, backups and incident planning, with roles and responsibility sharpened in the 2024 and March 2025 updates. The wording that leaders will be held to is set out in the current DfE cyber security standards and the official updates log.
The scale of the problem is not in doubt. The official Cyber Security Breaches Survey 2024, education annex shows most secondary schools identified a breach or attack in the last year, with higher education and further education reporting even higher levels. Phishing remains the main way in across education settings. Primary schools are more likely than secondaries to outsource cyber security to a provider. Structured risk activity and testing are less common in schools than in colleges or universities, which hints at a familiarity gap as much as a resource gap.
Walk the estate and the pattern repeats. A cupboard server that should have retired two summers ago. Laptops that will not take the latest operating system. A wireless network that is fine until the first mock exams. A ticket queue that never quite reaches zero because the same flaky devices keep coming back. A trust office that relies on one person who knows every quirk in the setup. Contracts that read well until the first hour of an incident when nobody is quite sure who calls whom. None of this is unusual. It is the daily reality for many schools and trusts.
Keeping up asks for time, attention and a constant focus on the basics. Ageing infrastructure pushes costs into firefighting and out of planned improvement.
Multi factor authentication is clearer in policy than it is on the ground. The standard is explicit that senior leaders and staff who handle confidential, financial or personal data must use multi factor authentication, and it encourages schools to extend that protection to all cloud services and to all staff where appropriate, as set out in the DfE standards.
Training is too often a yearly tick in a learning portal rather than short, timely sessions that reflect how staff actually work. The same page points schools to free NCSC training for school staff and expects an annual cycle for users in scope.
Backups exist in most places, but restore tests are less certain. The guidance calls for an approach that reflects the three two one principle, for termly tests, and for evidence that can be shown to insurers. Members of the Risk Protection Arrangement should note the cyber conditions in the RPA membership rules.
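A termly restore test need not be elaborate. One minimal sketch: restore a backup into a scratch directory and confirm the checksum matches the live copy. The file names are hypothetical, and the restore step is stood in by a plain file copy; a real test would run the actual restore tooling, time it, and keep the result as evidence for insurers:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256_of(path):
    """Checksum of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def restore_test(source_file, backup_file):
    """Restore the backup into a scratch directory and confirm the
    restored copy matches the live file byte for byte."""
    with tempfile.TemporaryDirectory() as scratch:
        restored = Path(scratch) / Path(source_file).name
        shutil.copy(backup_file, restored)  # stand-in for the real restore step
        return sha256_of(restored) == sha256_of(source_file)

# Illustrative run against a throwaway file standing in for school data.
with tempfile.TemporaryDirectory() as d:
    live = Path(d) / "pupil_records.csv"
    live.write_text("id,name\n1,example\n")
    backup = Path(d) / "pupil_records.csv.bak"
    shutil.copy(live, backup)
    print(restore_test(live, backup))  # True only if the backup restores intact
```

Even a script this small turns “backups exist” into “backups restore”, which is the distinction the guidance, and any insurer, actually cares about.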
Roles and responsibilities with service providers are another weak seam. Many schools buy support that includes security but do not write down who owns the first hour of a crisis or how changes to identity, firewalls and backups are controlled and recorded. The DfE advises schools to ask for Cyber Essentials or Cyber Essentials Plus from suppliers and to map contracts to the controls the school must meet in the supplier expectations section.
Every request for a firewall refresh, a device replacement round or an identity project competes with classroom and welfare priorities. That is the context for most decisions. Even when small pots of money or frameworks exist, the bidding and compliance work is hard to absorb for small teams. The standards help. They say that a cyber risk assessment should be completed each year and reviewed every term, that data backup should be planned and tested, and that multi factor authentication should be used by senior leaders and by anyone handling sensitive or financial information. Anchoring spend to the DfE standards moves the conversation from optional to expected.
Large trusts can justify a chief information officer or a dedicated security lead. Many schools rely on a small internal team and an external provider to cover identity, devices, connectivity and day to day support. Recruiting and retaining people with current skills is difficult because public sector pay rarely keeps pace with private offers. In small schools, the lone technician can be isolated and short of time to learn. The standards acknowledge that reality by naming a senior leadership digital lead as accountable and by telling schools to seek outside help where skills are not available in house, set out under roles and accountability.
Security now touches a wider set of skills than a decade ago. It is not enough to keep antivirus up to date and patch servers. Schools need to understand cloud identity, conditional access, logging and alerting, incident response, supplier risk and insurance conditions. The NCSC 10 Steps is a simple lens for conversations with governors and senior leaders and lines up well with the DfE standards.
Schools work within UK GDPR and the Data Protection Act, and they should align with the DfE standards. Colleges are required to hold Cyber Essentials under their funding agreement. Schools are not required to certify, but the department encourages it and advises schools to ask suppliers for certification, as recorded in the standards and the DfE updates log. These points are worth writing into procurement, contract renewals and any review of a managed service.
Regulation sets the floor. Reputation sets the ceiling. Parents will assume the school uses modern, safe technology and sound practice. When a breach becomes public, the technical fix is only one part of the work. Community trust is harder to rebuild. The case for early investment is not only technical. It is also about confidence, transparency and the ability to show that the basics are in place and tested. For broader context, see the GOV.UK data protection guidance for schools and the ICO overview of children and UK GDPR.
Begin with a written cyber risk assessment and set a rhythm of review each term. Keep it short, name the owners and focus on what will change before the next holiday. Make sure a senior leadership digital lead is accountable and that governors see the risk register and the business continuity plan. Turn on multi factor authentication for senior leaders and for anyone who handles confidential, financial or personal data, as framed in the DfE standards. Extend coverage to administrator accounts and set out the path to bring all staff into scope where appropriate. Where a person needs accessibility adjustments, write them down and keep a record of the reasoning.
Tidy identity and access. Use unique credentials for all staff and pupils. Set sensible lockout rules. Follow NCSC guidance on passwords. Remove standing administrator rights wherever you can and add simple checks with HR for joiners, movers and leavers so that accounts follow the person and do not drift.
Fix backups and prove that they work. Keep protected copies that reflect the three two one principle. Test a restore each term, record the evidence and store the plan somewhere that does not rely on the system you are trying to recover. If you are in the Risk Protection Arrangement, note that cover depends on these practices and on annual training for users in scope, as set out in the RPA membership rules.
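A restore test only counts as evidence if the result is recorded. As a minimal sketch, assuming backups are restored to a scratch folder and compared against a manifest of SHA-256 hashes kept with the continuity plan (the manifest format here is an illustration, not part of any DfE or RPA guidance):

```python
import hashlib
from pathlib import Path

def file_hash(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(restore_dir: Path, manifest: dict[str, str]) -> list[str]:
    """Compare restored files against a manifest of relative path -> expected hash.
    Returns a list of problems; an empty list means the restore test passed."""
    problems = []
    for rel_path, expected in manifest.items():
        target = restore_dir / rel_path
        if not target.exists():
            problems.append(f"missing: {rel_path}")
        elif file_hash(target) != expected:
            problems.append(f"corrupt: {rel_path}")
    return problems
```

Run it each term against a fresh restore, keep the output with the date, and the insurer conversation becomes a matter of showing the log.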
Secure the boundary you actually have. Check the firewall configuration. Protect available administrator interfaces with multi factor authentication. Make sure logs and alerts are enabled and someone will see them. If your broadband contract includes a managed firewall, sit down with the provider and map what they run to the wording in the DfE standard. Then write down who does what in an incident and share a one page flow that lists first actions, on call numbers and the information both sides will exchange in the first hour. Ask for proof of Cyber Essentials or Cyber Essentials Plus from your provider and keep it with the contract.
Move what you can to cloud services. The guidance is explicit that schools should use cloud solutions rather than local servers where possible, again set out in the DfE standards. If a system cannot move this year, record why and set a review date.
Finish the job on multi factor authentication. Bring all staff into scope. Choose methods that reduce the chance of tricking someone, especially for administrator accounts. Treat identity health as routine work.
Use the tools you already pay for. Many schools on Microsoft 365 or Google Workspace have baseline security features that are not yet switched on. Plan the rollout of endpoint protection, conditional access, email security, data loss prevention and identity risk signals. Tie every change back to the written risk assessment so the story is clear.
Improve monitoring and logging. Decide what you will collect, where you will keep it and who will look at it. Even simple steps such as forwarding audit logs for administrator actions and setting alerts for risky sign ins can cut the time it takes to see trouble. The DfE standard links to NCSC guidance on logging that can help define scope.
Test the plan, not just the backups. Run two tabletop exercises a year. Choose one scenario where a staff account is taken over after a phishing lure and one where shared drives are encrypted. Time the first hour. Write down what slowed you down and fix it. The NCSC Exercise in a Box service offers a structured path if you need it.
Raise the floor on patching and device health. Shorten deadlines for critical updates. Automate operating system and browser updates wherever you can. Measure compliance every week and chase what falls behind. The education annex to the 2024 breaches survey shows that primaries in particular have room to improve on structured risk identification and testing.
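Weekly compliance measurement can be as simple as a script over the device inventory. A minimal sketch, assuming an inventory of device name and last patched date pairs; the fourteen day threshold is illustrative, not a figure from the standards:

```python
from datetime import date, timedelta

def patch_compliance(inventory, today, max_age_days=14):
    """Given (device_name, last_patched: date) pairs, return the compliance
    percentage and the list of devices that have fallen behind."""
    cutoff = today - timedelta(days=max_age_days)
    behind = [name for name, patched in inventory if patched < cutoff]
    compliant = len(inventory) - len(behind)
    pct = 100.0 * compliant / len(inventory) if inventory else 100.0
    return round(pct, 1), behind
```

The point is less the arithmetic than the habit: a number every week, and a named list to chase.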
Bake supplier checks into buying. Ask for Cyber Essentials or Cyber Essentials Plus during procurement. For higher risk systems, ask how the supplier will help you meet your duties under data protection law and under the DfE standards. Keep the evidence with the contract and review it at renewal, as advised in the DfE standards.
Join a peer community and share what works. If you are a single school IT lead, do not work alone. Use local networks and LGfL security resources to compare notes and borrow practical guidance.
Technology matters, but people and process keep a school resilient. Training works best when it is little and often rather than a single annual push. Use the NCSC modules, run short refreshers after real incidents and make time in briefings to swap lessons learned. Keep the incident plan to a few pages so it is usable when things are busy. Agree escalation paths with your provider and link the contract to the controls you are expected to meet. Pick a few trusted staff in different parts of the school and ask them to act as security champions. Give them a clear route to report concerns and share tips.
Governors, head teachers and business managers set the tone. The standards place accountability with a senior leadership digital lead and expect governors to ask questions, to include cyber in the risk register and to carry digital risks into the business continuity plan, as set out in the DfE standards. Colleges must hold Cyber Essentials. Schools should consider certification for themselves and ask for it from suppliers. Treat it as a milestone that forces attention on the basics rather than a badge for the website. The requirement for colleges is recorded in the DfE updates log.
The data shows a sector that sees frequent attacks and is still catching up on some fundamentals. The standards are clearer than before about what to do and who is responsible. Put the two together and the message is simple. Without sustained investment in technology, people and partnerships, schools will not keep pace with current threats. Digital resilience needs to move from an information technology task to a school wide priority.
What is your take? Where does your school or trust feel most exposed right now, and what would make the biggest difference this term?
Let us share the good, the bad and the messy middle. What has worked, what has not, and what you would change next time.
About the Author
Ben Meyer is a senior IT and cybersecurity leader with a decade of experience spanning scale-ups, global enterprises and the public sector (contract). He specialises in modernising IT operations, implementing cloud platforms, and embedding security and compliance frameworks aligned with ISO 27001 and GDPR. Ben has partnered with startups, charities and high-growth organisations to deliver pragmatic, people-centred technology strategies. He’s held leadership roles across IT support, infrastructure and cyber, and is known for building reliable systems with a clear focus on risk, documentation and operational maturity.
Disclaimer: This article is provided for general information only and does not constitute legal or technical advice. The author is not a legal professional. Organisations should seek independent legal counsel and technical advice before relying on, or acting upon, any of the points discussed.
2025-08-23
Microsoft will throttle outbound email sent from onmicrosoft.com addresses to 100 external recipients per tenant per day. The aim is to cut abuse and push every customer to send from a verified custom domain. Here is what changes, who is affected, and the practical steps to take now.
Image credit: Created for TheCIO.uk by ChatGPT
Microsoft will throttle outbound email that is sent from a tenant’s default onmicrosoft.com address. The cap is 100 external recipients per organisation in a rolling 24 hour window. Internal mail is not affected. When you hit the ceiling, senders see an NDR with 550 5.7.236. Microsoft’s Exchange Team says the change is designed to stop abuse of shared onmicrosoft domains and to nudge every customer to send from a vetted custom domain with proper authentication. A phased rollout starts 15 October 2025 for trial tenants and completes 1 June 2026 for the largest estates.
Source: Microsoft Exchange Team announcement, August 2025.
When you create a Microsoft 365 tenant, you receive a default email domain in the form tenantname.onmicrosoft.com. This is the MOERA address, short for Microsoft Online Email Routing Address. It helps you get up and running quickly, but it was never meant to be the long term sending identity for communication with customers, partners or the public.
Microsoft is now enforcing that intent. Messages sent to external recipients from a MOERA address will be throttled. The tenant wide cap is 100 external recipients per 24 hour rolling window. Distribution lists expand before counting, so one message to a large external list can consume the allowance. Internal mail is out of scope. Once throttled, senders receive non delivery reports with code 550 5.7.236. The Exchange Team sets out the changes, the reason, and the edge cases in its announcement.
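The mechanics of a rolling window cap are easy to illustrate. This is a hedged model of the behaviour described above, not Microsoft's implementation: each external recipient is one event, events expire after 24 hours, and because distribution lists expand before counting, one message to a large external list consumes one event per expanded recipient:

```python
from collections import deque

WINDOW_SECONDS = 24 * 60 * 60
CAP = 100  # external recipients per rolling 24 hour window

class RollingRecipientCap:
    """Illustrative model of a rolling-window throttle: one timestamped event
    per external recipient; events older than the window fall out."""

    def __init__(self, cap=CAP, window=WINDOW_SECONDS):
        self.cap, self.window = cap, window
        self.events = deque()  # timestamps, one per external recipient

    def try_send(self, now: float, external_recipients: int) -> bool:
        # Expire events that have left the rolling window.
        while self.events and now - self.events[0] >= self.window:
            self.events.popleft()
        if len(self.events) + external_recipients > self.cap:
            return False  # would be throttled; the real sender sees 550 5.7.236
        self.events.extend([now] * external_recipients)
        return True
```

Note the consequence of the model: the allowance does not reset at midnight, it drains continuously, so a burst at 9am still constrains what can be sent at 8am the next day.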
The abuse pattern is simple. Spammers spin up fresh tenants and blast out spam from new onmicrosoft addresses before reputation systems have any signal. That drags down deliverability for everyone who shares the namespace. The throttle tackles this by limiting the blast radius and by pushing customers to use owned, authenticated domains.
The rollout is phased by Exchange seat count, beginning with trial tenants in October 2025 and working up to the largest estates by June 2026.
Microsoft says tenants will receive Message Center notices one month before their stage begins. Plan on the basis that you may not see or act on that reminder in time.
The Exchange Team is explicit. MOERA is fine for testing. It is the wrong choice for production email. Abuse from new tenants harms the shared reputation of onmicrosoft, so Microsoft is limiting the number of external recipients and advising every customer to move outbound email to a custom domain.
This sits alongside a wider tightening of outbound controls in Microsoft 365:
Tenant wide external recipient rate limit. In February 2025, Microsoft announced a new tenant wide cap on external recipients per day, separate from per mailbox limits. It is designed to frustrate abuse at scale and to stop bad actors spreading sends across many accounts. Microsoft’s post and independent analysis from Practical 365 explain the model and the impact.
Outlook high volume sender requirements. In April and May 2025, Microsoft set new requirements for domains that send more than 5,000 messages per day to Outlook.com addresses. SPF, DKIM and DMARC are mandatory, with non compliant traffic first routed to Junk then rejected with error 550 5.7.515. The Microsoft Defender for Office 365 blog has the canonical guidance.
The direction of travel is clear. Better authentication, better hygiene, and better accountability for anyone who sends email at scale. The MOERA throttle does not replace those controls. It complements them by closing off a shared identity that was never meant for production.
If you already send all external mail from a custom domain that you own and authenticate correctly, you will barely notice the MOERA throttle. If any workflow still sends from onmicrosoft.com, you will.
Beyond obvious cases where small firms and public bodies never moved beyond the default address, there are platform features and integration patterns that can fall back to MOERA when your default domain is still set to it. Microsoft calls out several scenarios in its post, including Bookings, sender rewriting and hybrid mail routing.
These are the flows that will hit the wall first because they can lurk under the surface. A service owner may believe that everything uses the corporate domain, while a built in feature still relies on MOERA behind the scenes.
Set your default domain to your custom domain
If your tenant still uses the MOERA variant as the default, change it. Make your owned domain the default so the platform and its services pick it up by design. Microsoft documents how to select the domain used by Microsoft 365 product emails.
Move primary SMTP addresses to your custom domain
Users and shared mailboxes should send from your corporate domain. Changing the primary SMTP can affect the username used for sign in where the UPN equals the primary SMTP, so schedule, communicate and support the change. The Exchange Team flags this impact in the announcement.
Audit actual MOERA usage with Message Trace
Use Message Trace in the Exchange Admin Center to filter senders that match your MOERA wildcard. Pull a 90 day view, filter out internal recipients, then sort by sender and volume. This reveals the systems and patterns to fix before your stage begins. Microsoft gives this exact approach.
Reconfigure Microsoft 365 products to use your domain
Set Microsoft 365 products to send from your domain where supported. It removes reliance on generic product addresses and MOERA fallbacks and makes notifications look like they come from you.
Harden your domain and align identity
If you send at scale, Outlook’s requirements make SPF, DKIM and DMARC non negotiable. In truth, every sender benefits from correct alignment. It protects your brand and helps your email land where it belongs.
Plan the edge cases
Check Bookings configuration, SRS behaviour and hybrid routing. Verify that journaling is excluded and that postmaster and abuse addresses are set sensibly. The Exchange Team’s call outs are a practical checklist.
Start with Message Trace. Set the sender to *@*.onmicrosoft.com. Pull a three month window so you catch weekly and monthly cycles. Export and filter to external domains. Then work through the list, category by category.
Each category has a fix. Most are straightforward and low cost. The trick is to uncover them before the throttle lands.
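The triage above can be sketched in a few lines. A minimal example, assuming the trace has been exported to CSV with SenderAddress and RecipientAddress columns; those column names are assumptions about your export, so adjust to match:

```python
import csv
import re
from collections import Counter

# Matches any sender in the onmicrosoft.com namespace, e.g. app@contoso.onmicrosoft.com
MOERA = re.compile(r"@[^@]*\.onmicrosoft\.com$", re.IGNORECASE)

def moera_external_senders(csv_path, internal_domains):
    """Tally external recipients per MOERA sender from a message trace export.
    Assumes columns named SenderAddress and RecipientAddress."""
    internal = {d.lower() for d in internal_domains}
    counts = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            sender = row["SenderAddress"].strip()
            recipient = row["RecipientAddress"].strip()
            if not MOERA.search(sender):
                continue  # only onmicrosoft.com senders are in scope
            rcpt_domain = recipient.rsplit("@", 1)[-1].lower()
            if rcpt_domain in internal:
                continue  # internal mail is out of scope for the throttle
            counts[sender.lower()] += 1
    return counts.most_common()
```

The sorted output gives you the worklist: the highest-volume MOERA senders are the flows to fix first.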
Look at the MOERA throttle alongside the other 2025 changes.
The tenant wide external recipient rate limit restricts the total number of external recipients a tenant can reach in a day, regardless of how many accounts you spread the send across. It is designed to frustrate abuse and stop people treating Microsoft 365 as a bulk sending engine. The official announcement and community analysis are clear on intent and mechanics.
At the same time, Outlook high volume sender rules began to enforce basic authentication hygiene for bulk senders to Outlook.com. Fail SPF, DKIM and DMARC and your messages first go to Junk, then risk rejection as enforcement tightens. The bar is higher and the documentation is public.
The MOERA throttle is another piece of that puzzle. It is not a standalone fix, it is a nudge toward owned identity and modern authentication.
Shared domains suffer from the weakest participant. That is the root of the MOERA problem. If a hundred new tenants behave well and five abuse the namespace, the shared reputation for the onmicrosoft family suffers. Filters reflect that reality. A cap on the number of external recipients from MOERA addresses is a blunt but effective way to reduce the threat surface and to steer customers toward owning their identity.
There is a brand and trust element beyond pure deliverability. Email that arrives from a corporate domain that you control and authenticate is part of your public identity. In sectors like financial services, healthcare and central government, where citizens and customers are rightly cautious of anything that looks automated, a note from a product no reply address or from MOERA can undermine trust and increase the chance of being flagged as suspicious. The policy change will force a higher baseline and bring long ignored settings work to the top of the pile.
For the public sector and for schools, the alignment with central guidance is natural. Own your domain. Authenticate it properly. Make systems speak with one voice. The throttle is likely to flush out configuration debt in education, in local authorities and across the third sector where day one settings were never revisited. The cost to fix is low compared with the cost of deliverability problems and the reputational damage of being flagged as spam.
Step one. Inventory
Run Message Trace and identify every flow that sends from a MOERA address to the outside world. Classify by owner and confirm volumes.
Step two. Fix the defaults
Make your custom domain the default. Create or verify the required DNS. Confirm SPF. Set up DKIM. Publish DMARC with a monitoring policy if you are not ready for a strict reject policy. Move primary SMTP addresses for users and shared mailboxes across to the corporate domain. Communicate the change and support teams that have saved credentials.
Step three. Reconfigure product notifications
Use the admin setting to make Microsoft 365 products send from your domain rather than from product brands. This cleans up the look and avoids MOERA fallbacks.
Step four. Tidy the edges
Check Bookings, SRS and hybrid scenarios. Confirm journaling behaviour. Fix anything that still uses MOERA for outbound.
Step five. Prove it with tests
Send to a diverse set of external recipients. Check headers to confirm the right domain is in use, that DKIM is signing with your domain and that DMARC alignment is correct.
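The header check can be partly automated. A minimal sketch that pulls the From domain and the DKIM d= domain from a raw message and tests strict alignment only; real DMARC alignment also permits relaxed, organisational domain matching, which this does not model:

```python
import re
from email import message_from_string
from email.utils import parseaddr

def dkim_from_alignment(raw_message: str):
    """Return (from_domain, dkim_domain, aligned) for a raw message source.
    Strict alignment only; relaxed organisational-domain matching is not modelled."""
    msg = message_from_string(raw_message)
    from_domain = parseaddr(msg.get("From", ""))[1].rsplit("@", 1)[-1].lower()
    dkim_domain = ""
    sig = msg.get("DKIM-Signature", "")
    m = re.search(r"(?:^|;)\s*d=([^;\s]+)", sig)
    if m:
        dkim_domain = m.group(1).lower()
    return from_domain, dkim_domain, bool(dkim_domain) and from_domain == dkim_domain
```

Run it over the message source of your test sends: a d= value still ending in onmicrosoft.com is exactly the regression this exercise is meant to catch.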
Step six. Set guard rails
If you operate a large tenant, consider transport rules that block or flag any attempt to send externally from a MOERA address. Add monitoring to catch regressions. Treat any new system that wants to send externally as a change that requires a domain and authentication review.
If a team is using a user mailbox or a shared mailbox to run outreach at scale, they will already have seen trouble with per mailbox limits and with the tenant wide external cap. The MOERA throttle is another layer. The right answer is not to fight the platform. The right answer is to move bulk send to a dedicated provider that is designed for that purpose, operates within the law and is configured with your domain, your authentication and your consent model. Microsoft 365 is for business communication, not for bulk campaigns. The official guidance is to use Azure Communication Services Email if you must exceed Exchange Online limits.
There are still on premises applications that speak to the world through a relay and that were configured years ago to use a MOERA identity. The fix is the same. Change the sender to a custom domain and authenticate it. If you run hybrid, review the path the messages take and ensure the stamp on the outside is your domain with DKIM signing and DMARC alignment. If the system truly cannot be modernised, consider a relay service that supports your authentication model and is configured with your domain. Do not accept MOERA as an excuse. The throttle turns it from a poor choice into a hard limit.
Inbound mail is not affected. The cap applies only to external recipients on outbound mail from a MOERA sender. Journaling reports use the Microsoft Exchange Recipient address and are excluded. Hybrid out of office edge cases that involve mail.onmicrosoft.com are not throttled so long as MOERA is not used for the original send. If your environment uses federated domains for sign in, you will still need a non federated custom domain in the tenant to act as the default domain. The announcement covers all of these points.
The most useful outcome of this change may be the conversation it forces between IT, security and service owners. Email identity is an organisation wide asset. It deserves a clear policy and a change gate. If a team wants to send externally, they should do it under the corporate domain, with proper authentication and with accountable ownership. The MOERA throttle will flush out shadow IT email patterns because they will simply stop working at scale. Use that moment to consolidate control rather than to grant exceptions.
For boards and senior leaders, the question is straightforward. Do we control the identity that speaks for us? If the answer is not an immediate yes, the Microsoft changes are a timely prompt to fix it.
Microsoft’s decision to throttle outbound email from onmicrosoft.com is not a surprise. Shared domains are a magnet for abuse. The change is pragmatic and overdue. It will frustrate spammers, frustrate poor outreach practices and nudge every customer, large and small, toward owning and authenticating their own domain.
For UK organisations, the work to adapt should be measured in days and weeks, not in months. The steps are clear. Set your default domain. Move primary SMTPs. Repoint product notifications. Tighten authentication. Sweep for edge cases. Prove it with tests. Put guard rails in place so it stays fixed.
Do this now and you will not notice the throttle when your stage arrives. Leave it until the Message Center reminder and you will be fixing production problems under time pressure. The technology is straightforward. The leadership ask is even simpler. Make your organisation speak with its own voice, every time, to everyone.
What is your take? Will this throttle quietly lift deliverability for good actors, or will it expose more configuration debt than teams expect?
Let’s share the good, the bad and the messy middle. What broke in testing, what was easy to fix, and what still needs better guidance from Microsoft.
Disclaimer: This article is provided for general information only and does not constitute legal or technical advice. The author is not a legal professional. Organisations should seek independent legal counsel and technical advice before relying on, or acting upon, any of the points discussed.
2025-08-22
The Bouygues Telecom breach affecting 6.4 million customers is only one of a series of incidents exposing the fragility of telecoms worldwide. From the UK to the US, from South Korea to Australia, attackers are exploiting the industry’s unique role as both infrastructure and data custodian.
Image credit: Created for TheCIO.uk by ChatGPT
When Bouygues Telecom confirmed on 4 August that hackers had accessed the data of more than 6.4 million customers, the disclosure landed as another chapter in what has become a troubling series of incidents across the global telecommunications sector. On the surface, the French operator provided reassurances: bank card numbers and passwords were untouched, the immediate intrusion was blocked, and national authorities had been informed. Yet the details that did emerge carried significant weight. Contact information, contractual data, civil status records and IBANs had been exposed.
This combination of sensitive but not always headline-grabbing information illustrates the changing nature of risk. The obvious impact may not come through empty bank accounts the following morning. Instead, it will be the gradual build-up of risk as criminal groups and fraudsters recycle, combine and weaponise personal data for targeted phishing, impersonation and more sophisticated forms of fraud. For a provider of Bouygues’ scale, which services nearly 27 million mobile customers, the breach is both a national issue and part of a global story about how vulnerable our communications infrastructure has become.
Telecommunications firms occupy a peculiar position in the cyber security landscape. They are both the providers of connectivity and the guardians of customer data on an extraordinary scale. Unlike financial services firms, which operate in a tightly regulated environment with constant scrutiny from central banks and regulators, telecoms companies have historically had more leeway. They are critical infrastructure, yet they do not always carry the same level of oversight as banks or national utilities.
That imbalance is increasingly being exploited. In Europe alone, Orange Belgium reported a breach in July that exposed the data of 850,000 customers, including SIM card details and PUK codes. Though passwords and financial information were unaffected, the stolen details are enough to enable SIM-swap fraud or social engineering attacks on unsuspecting individuals.
In the United Kingdom, Colt Technology Services was forced to take systems offline in August after attackers stole several hundred gigabytes of data by exploiting a vulnerability in SharePoint. The breach affected internal systems and led to the temporary suspension of customer-facing services. For a company serving multinational clients across data, voice and cloud, the disruption and reputational harm were immediate.
These incidents do not exist in isolation. They form part of a wider trend in which attackers have increasingly targeted telecom providers as repositories of both data and influence.
Half a world away, South Korea’s largest mobile operator, SK Telecom, has been forced into a period of unprecedented introspection. Earlier this year it admitted that attackers had compromised critical USIM authentication data, which underpins how phones connect securely to networks. Regulators fined the company and ordered sweeping reforms, including a multi-year investment programme to overhaul security.
The scale of the breach was staggering. More than 23 million subscriber records were implicated, involving unique identifiers such as IMSI and IMEI codes that are deeply embedded in how devices authenticate themselves. This was not just another case of exposed email addresses. It was a compromise that cuts to the technical fabric of the network itself.
In a different but related case, South Korean investigators revealed that high-profile celebrities and business leaders had been targeted through telecom website breaches, with attackers aiming to hijack access to bank and cryptocurrency accounts. The inclusion of public figures such as K-pop star Jungkook in the narrative underscores how breaches of telecom infrastructure reverberate far beyond corporate boardrooms.
In the United States, the picture is more complex and arguably more alarming. On one level, consumer data breaches continue to generate lawsuits and settlements. AT&T is still reeling from a 2024 breach that exposed information from more than 86 million customers. A proposed settlement of 177 million dollars has been floated, which could provide individual compensation of up to 7,500 dollars per person. This financial dimension is familiar territory for observers of American class action law.
But beneath the surface there is a more strategic threat. Intelligence reports and investigative journalism have linked state-sponsored groups, including a Chinese-affiliated cluster known as Salt Typhoon, to intrusions at several major US telecom firms. Unlike criminal ransomware groups seeking ransom payments, these operations have targeted metadata, surveillance systems and even call recordings of government officials. Such campaigns are not about quick profits. They are about intelligence, influence and in some cases preparing the ground for potential disruption in times of geopolitical tension.
The line between criminal cyber operations and state-linked espionage is becoming harder to draw. Where Bouygues Telecom and Orange Belgium may primarily be grappling with criminal data theft, their counterparts in the United States are facing sustained campaigns designed to undermine national security. Yet both phenomena emerge from the same underlying truth: telecoms firms are now in the crosshairs.
In August, TPG Telecom’s iiNet division disclosed that 280,000 customer accounts had been exposed after attackers used stolen employee credentials to access an internal system. The details included email addresses, phone numbers and, in some cases, modem setup passwords. As with the Bouygues incident, the company emphasised that financial records and identity documents were not part of the breach. Yet customers will remain at heightened risk of fraud attempts, while regulators will be asking whether authentication systems for employees are truly fit for purpose.
Australia has already endured a series of high-profile breaches across healthcare and retail sectors. The iiNet incident signals that telecoms are no less exposed, and that the broader Asia-Pacific region is facing the same intensifying wave of attacks that has swept across Europe and North America.
Part of the answer lies in the nature of the data itself. Even when financial details are excluded, telecoms firms hold information that can be leveraged for fraud and surveillance. Contact details, SIM data, call records and authentication identifiers are valuable in themselves and even more so when combined with data from other breaches.
Another factor is the role of telecoms as infrastructure. A breach at a single provider can have a cascading effect across multiple sectors, from emergency services to online banking. The 2023 attack on Kyivstar in Ukraine demonstrated the point with brutal clarity. Attributed to a Russian military hacking group, the attack disrupted not only mobile and internet services but also national air raid warning systems at the height of missile attacks. The financial and operational costs were estimated at 90 million dollars, but the strategic implications went far deeper.
Attackers understand that telecoms firms are not merely businesses. They are arteries through which national life flows. That makes them uniquely valuable and uniquely vulnerable.
The regulatory landscape is evolving, though often unevenly. In France, the national data regulator CNIL and the cyber security agency ANSSI are involved in overseeing Bouygues’ response. In South Korea, the regulator imposed fines and demanded structural reform at SK Telecom. In the United States, consumer lawsuits and settlements continue to shape the landscape, while intelligence agencies take a lead on the espionage dimension.
For UK firms such as Colt, the regulatory burden lies partly with the Information Commissioner’s Office, but also with national security bodies tasked with protecting critical infrastructure. Each jurisdiction has its own emphasis, yet the common theme is that regulators are under pressure to hold providers accountable and to prevent complacency.
One of the most striking lessons from recent incidents is how telecoms boards and executives are now forced to treat cyber security as a front-line issue rather than a back-office function. Customer trust, national security, regulatory fines and legal liabilities all converge on the same point. A data breach is no longer a technical mishap. It is a governance crisis.
Boards are also grappling with how to fund and prioritise cyber resilience in organisations that already operate with thin margins in competitive markets. Shareholders demand returns, customers demand lower prices, and regulators demand security. Balancing these demands requires leadership willing to make difficult trade-offs.
Although the breaches discussed involve telecom providers, the implications for the UK financial sector should not be underestimated. Banks and insurers rely on telecom networks for everything from two-factor authentication via SMS to secure voice communications. If customer data from a telecom breach is recycled into targeted phishing campaigns, financial firms are often the next victims.
There is also a dependency dimension. If a telecom operator suffers prolonged disruption, as Kyivstar did in Ukraine, financial transactions and trading platforms may be directly affected. The resilience of financial services cannot be separated from the resilience of the communications sector.
From France to South Korea, from the United States to Australia, the pattern is consistent. Telecoms firms are struggling with a surge of cyber incidents that vary in detail but converge in meaning. They reveal weaknesses in authentication, in patching, in monitoring, and sometimes in culture. They highlight the growing intersection of criminal profit-seeking and state-linked espionage.
The lesson for executives across all sectors is that no company can assume immunity. The details of what is stolen may differ, but the strategic impact is the same. Breaches erode trust, invite regulatory scrutiny, and create fertile ground for future attacks.
The Bouygues breach is not just a French problem. It is part of a mosaic that spans continents and industries. The attackers may vary in sophistication, and the data may differ in sensitivity, but the direction of travel is clear. Telecommunications firms are now a frontline target in the global cyber conflict.
For customers, the practical advice remains familiar: be alert to phishing, scrutinise messages that request financial or personal details, and recognise that even partial data leaks can have real consequences. For executives and policymakers, the message is sterner. Telecoms are critical infrastructure, and breaches in this sector carry risks that go well beyond the balance sheet.
The global picture is one of rising stakes, where every breach erodes not just the privacy of individuals but the resilience of national economies and public safety. Bouygues Telecom may be the latest name in the headlines, but it will not be the last. The true test is whether the sector can learn from these incidents quickly enough to prevent the next crisis.
What’s your take? Do you think telecoms are prepared to meet the challenge of rising cyber threats, or are we only at the beginning of a much larger crisis?
2025-08-20
The Workday data breach highlights the growing reliance on social engineering tactics, exposing vulnerabilities in enterprise CRM systems and sending ripples across industries including the UK financial sector.
Image credit: Created for TheCIO.uk by ChatGPT
On 18 August 2025 Workday disclosed a data breach following a social engineering attack that compromised a third party customer relationship management platform. The breach, part of a wider campaign targeting Salesforce CRM environments, saw threat actors access business contact information such as names, phone numbers and email addresses. Customer tenant data was not involved.
This incident joins a series of attacks that have ensnared some of the world’s most recognisable brands including Google, Adidas, Qantas, Dior, Chanel and Louis Vuitton. It exemplifies the growing menace of social engineering attacks on enterprise systems, particularly those relying on CRM tools. In this article I explore the unfolding narrative, the threat landscape, the response from security professionals and the ripple effects across corporate Britain, including the UK financial sector. The latter is not the main focus but a significant note of concern.
Workday, a Californian HR software giant with more than 19,300 employees and serving over 11,000 organisations including over 60 per cent of Fortune 500 firms, announced that the breach was detected on 6 August, nearly two weeks before its public disclosure.
In a blog post reported by BleepingComputer, Workday admitted that threat actors accessed “some information” from a third party CRM platform used in their systems. They emphasised there was no evidence customer tenant data or internal user files had been compromised.
The exposed data comprised primarily business contact information such as names, phone numbers and emails. While not highly sensitive, such information is valuable for phishing and social engineering campaigns.
Workday cautioned users against unsolicited communications. They clarified they would never request passwords or sensitive details via phone, and stressed that all official correspondence uses trusted support channels.
Experts link Workday’s breach to a broader wave of attacks targeting Salesforce based systems. Groups such as ShinyHunters, also known as UNC6040, are behind a campaign involving vishing and phishing tactics to trick victims into installing malicious OAuth connected applications in their Salesforce environments.
Attackers impersonate internal HR or IT staff via phone or text, tricking employees into approving these apps. Once installed, threat actors access records, extract data and may attempt extortion via a data leak site.
Google, for example, noted that the attack involved a fake version of Salesforce’s Data Loader app which prompted a user to grant access that allowed data exfiltration.
This method has proved highly effective and alarmingly simple, with a growing number of enterprises falling victim. Thomas Richards of Black Duck noted this trend is deeply concerning, especially when attackers resort to painstaking social engineering because conventional methods may be failing.
Workday responded by severing access to the compromised CRM platform, introducing enhanced security protocols and reinforcing internal employee defences.
Salesforce customers have been advised to audit connected apps, revoke unfamiliar permissions, implement stricter access controls and enforce multi factor authentication.
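That audit can start small. The Python sketch below shows the shape of an allowlist check over a connected-app inventory; the app names, the inventory format and the 90-day staleness rule are all invented for illustration, since the real data would come from an admin export or an API call.

```python
# Hypothetical sketch: flag OAuth connected apps that are not on an
# approved allowlist, or that hold broad scopes but sit unused.
# In practice the inventory would come from an admin export; it is
# inlined here for illustration.

APPROVED_APPS = {"Salesforce Mobile", "Data Loader", "Slack Integration"}

def flag_unapproved(app_inventory):
    """Return (app, reason) pairs that warrant a manual review."""
    findings = []
    for app in app_inventory:
        if app["name"] not in APPROVED_APPS:
            findings.append((app["name"], "not on approved allowlist"))
        elif "full" in app.get("scopes", []) and app.get("last_used_days", 0) > 90:
            findings.append((app["name"], "broad scope but unused for 90+ days"))
    return findings

inventory = [
    {"name": "Data Loader", "scopes": ["api"], "last_used_days": 2},
    # A lookalike name, as seen in the fake Data Loader campaign:
    {"name": "DataLoader Pro", "scopes": ["full", "refresh_token"], "last_used_days": 1},
    {"name": "Slack Integration", "scopes": ["full"], "last_used_days": 200},
]

for name, reason in flag_unapproved(inventory):
    print(f"REVIEW: {name} ({reason})")
```

The point of the sketch is the shape of the control, not the thresholds: unknown apps and dormant broad grants both deserve a human decision before they keep their access.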
William Wright, CEO of Closed Door Security, urged organisations to train employees, limit privileges and apply MFA universally. Kevin Marriott at Immersive likewise warned that even minimal exposure such as names or email addresses can fuel sophisticated phishing campaigns.
This breach underscores a painful reality. The weakest link in many cyber defences lies not in hardware or software vulnerabilities but in human trust. Social engineering plays on our willingness to help and our assumptions about authority.
Enterprise security must adapt. Cyber teams must extend beyond technical controls to reinforce employee awareness, simulate phishing exercises and nurture a culture where refusal to comply with anomalous requests is accepted, not penalised.
Reliance on cloud based tools such as Salesforce makes the entire enterprise surface vulnerable. A single misstep like authorising a rogue OAuth app can permit attackers to harvest data across multiple customers without directly attacking core systems.
The UK financial sector boasts mature cyber defences and a keen regulatory focus. Yet this incident is a warning bell rather than an immediate crisis.
Many financial organisations rely on platforms such as Workday for HR functions, often integrated with CRM systems and third party tools. Should contact or staff details be exposed, adversaries could launch highly targeted phishing efforts. An email or text appearing to come from HR could lure an executive into compromising sensitive systems.
The regulatory landscape, including guidance from the Financial Conduct Authority and the Bank of England, demands robust governance over third party risk. This means assessing supply chain vulnerabilities and ensuring that external tools are subject to strict access controls and incident response plans.
Financial institutions in the UK should take this as a signal to revalidate policies around CRM integrations, vendor access and employee training. Zero trust network models, segmented privileges for auxiliary systems, regular penetration testing and enhanced incident detection protocols are all critical.
The implications for UK finance are notable but they are part of a larger context. This is a global phenomenon affecting every sector that uses cloud based environments and external platforms to manage data and employees.
Audit and harden systems. Conduct thorough reviews of OAuth connected applications, especially those tied to CRM systems. Remove unused apps and restrict the ability of employees to install them without authorisation.
Educate and simulate. Launch simulations of vishing and phishing attacks that emulate real world tactics, training employees to question unsolicited communications even if they appear trusted.
Enforce MFA and monitoring. Require multi factor authentication on all access points to critical systems, especially cloud platforms. Monitor logs for anomalous activity and unusual data exports.
Strengthen third party oversight. Expand contracts with cloud vendors to include breach notification clauses, access reviews and shared responsibility for security audits.
Responsive governance. Create review boards involving security, HR, legal and executive teams tasked with rapid incident response protocols including public communications.
Scenario planning. Embed social engineering scenarios into risk assessments. What if insider impersonation leads to credential theft? How quickly can systems isolate and block such activity?
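The monitoring point above can be made concrete with even a crude baseline check. This Python sketch flags users whose daily export volume jumps far above their own recent average; the log format, thresholds and account names are assumptions for illustration, not any real product's schema.

```python
# Illustrative sketch only: flag accounts whose daily record-export
# volume spikes well above their own recent baseline. Data is invented.
from statistics import mean

def flag_export_spikes(daily_exports, factor=5, min_records=500):
    """daily_exports maps user -> [counts for prior days..., today].
    Flag users whose latest day exceeds `factor` x their prior average
    and also clears an absolute floor of `min_records`."""
    alerts = []
    for user, counts in daily_exports.items():
        *history, today = counts
        baseline = mean(history) if history else 0
        if today >= min_records and today > factor * max(baseline, 1):
            alerts.append(user)
    return alerts

logs = {
    "alice": [40, 55, 30, 40],             # normal CRM usage
    "svc-dataloader": [20, 25, 15, 9000],  # sudden bulk pull
}
print(flag_export_spikes(logs))  # → ['svc-dataloader']
```

A real deployment would read from audit logs and tune the thresholds, but even this shape of check would have made a mass OAuth-app exfiltration visible the day it happened.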
The Workday breach revealed on 18 August 2025 is a significant event in the evolving landscape of enterprise cybersecurity. Conducted via social engineering of employees to compromise a CRM platform, it exposed business contact data and mirrors a wider assault against Salesforce systems worldwide.
The incident is a reminder that technology alone is not enough. In the age of sophisticated phishing and vishing, building human resilience is as important as firewalls and encryption. Organisations must combine strict technical defences with continuous employee training and a culture of scepticism.
For the UK financial sector the incident adds urgency. Verify that third party systems are secured, ensure staff remain vigilant, and confirm that incident response is rapid. Across all industries the lesson is universal. Threat actors exploit trust, and security must guard well beyond the perimeter.
The true battleground is within daily interactions. A simple call or message, if handled carelessly, can open the door to a major breach. A moment’s hesitation, however, may prevent it.
What’s your take? How should enterprises strengthen resilience against social engineering in a cloud dominated environment?
2025-08-18
QR codes are being weaponised in plain sight, and most people don’t even realise it. Here’s how attackers use them, why they work so well, and what we can do to defend against them.
Image credit: Created for TheCIO.uk
QR codes are everywhere. They’re in cafes, on desks, in meeting rooms and on posters at train stations. They speed up onboarding, bring up menus, and allow frictionless access to just about anything.
But they’re also being weaponised.
In the push toward mobile-first interaction, we’ve handed over a silent, scannable attack vector to cyber criminals, and most people don’t even realise they’re at risk.
In one of my cyber security awareness sessions, I left a flyer with a QR code on it lying in the publicly accessible reception and conference room we were using. No instructions, no description. Just a scannable square and a short headline:
Scan this to enter the draw for a prize.
Most people didn’t hesitate.
A few seconds later, they’d landed on a realistic, branded webpage at thecio.uk/dodgy-qr. It was harmless, a training tool, nothing more, but it proved the point. Almost everyone scanned the code without asking where it came from, where it pointed to, or who placed it there.
They did it because it looked official. Because it was printed on nice paper.
This is precisely the type of logic real attackers exploit.
QR phishing (or “quishing”) doesn’t require a hacked server or social engineering over email. It only needs one thing: your camera.
What makes it so dangerous is that the destination stays hidden until the moment of scanning, usually on a personal phone that sits outside corporate email filtering and link protection. And with a little bit of polish, anyone can design a fake feedback form, Wi-Fi registration page, HR onboarding form or benefits login screen that looks plausible, especially when it loads instantly on your phone.
Here are three practical scenarios where QR-based phishing has shown up in the wild, and in simulations I’ve run directly:
An attacker places a QR code sticker over a legitimate one — outside an event, meeting room or building lobby. It leads to a login prompt resembling a corporate Microsoft 365 login. Users enter credentials to “check in”.
Except the credentials are now in someone else’s hands.
Disguised as a harmless employee engagement survey, this QR leads to a fake HR portal. Users are asked to enter their name and email to participate, then receive a prompt to verify their identity by logging in.
Behind the scenes, it’s a credential harvesting operation.
Sent via email or posted in a building, this QR code claims your mobile access certificate is about to expire. It links to a page that mimics a security team portal, asking users to re-enter MFA details or install a profile.
Suddenly, the attacker has control over push notifications or device-level settings.
You don’t need to ban QR codes altogether. But you do need to train people to treat them with caution, just like suspicious links in emails.
Reducing the risk comes down to a few habits: preview the destination URL before opening it, question codes that appear without explanation or a clear owner, and report stickers or flyers that nobody can account for.
The goal isn’t to catch people out. It’s to build a moment of pause, a second thought, before that tap.
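That moment of pause can even be partly automated. The Python sketch below runs a few common-sense red-flag checks over a URL decoded from a QR code; the trusted-host list and the example domains are invented for illustration.

```python
# Illustrative sketch: simple red-flag checks on a URL decoded from a
# QR code, before anyone taps through. Domains here are made up.
from urllib.parse import urlparse

TRUSTED_HOSTS = {"thecio.uk", "intranet.example.com"}

def qr_red_flags(url):
    """Return a list of human-readable warnings; empty means no flags."""
    flags = []
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    if parsed.scheme != "https":
        flags.append("not HTTPS")
    if host.replace(".", "").isdigit():
        flags.append("raw IP address instead of a domain")
    if host and host not in TRUSTED_HOSTS and not any(
        host.endswith("." + t) for t in TRUSTED_HOSTS
    ):
        flags.append("host not on trusted list")
    if "@" in parsed.netloc:
        flags.append("credentials embedded in URL")
    return flags

print(qr_red_flags("http://192.168.4.20/login"))
print(qr_red_flags("https://thecio.uk/dodgy-qr"))
```

None of these checks is conclusive on its own, which is exactly the point: they create the second thought before the tap, not a verdict.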
Phishing isn’t just in your inbox anymore. It’s on walls, mugs, desks and badges. It hides behind convenience and branding. And it only takes one careless scan to open the door.
As cyber professionals, we need to start treating the physical space, not just the digital one, as part of the attack surface. If something feels too seamless to be secure, it probably is.
Train your teams to look before they scan.
Ben Meyer is a senior IT and cybersecurity leader with a decade of experience spanning scale‑ups, global enterprises and the public sector (contract). He specialises in modernising IT operations, implementing cloud platforms, and embedding security and compliance frameworks aligned with ISO 27001 and GDPR. Ben has partnered with startups, charities and high-growth organisations to deliver pragmatic, people‑centred technology strategies. He’s held leadership roles across IT support, infrastructure and cyber, and is known for building reliable systems with a clear focus on risk, documentation and operational maturity.
2025-08-11
The Clorox breach in the US and the M&S cyber incident in the UK show how attackers can bypass sophisticated defences simply by calling the help desk. For UK IT leaders, the warning could not be clearer.
Image credit: Created for TheCIO.uk by ChatGPT
The breach every IT leader fears often looks the same in the imagination. A nation-state-grade exploit. A shadowy attacker inside the network for months, extracting terabytes of data. A ransomware detonation at 3am, encrypting everything from payroll to production.
The reality, as two recent incidents on opposite sides of the Atlantic prove, is far more prosaic. And for that reason, far more dangerous.
Sometimes the attacker does not need to break into your network at all. They simply pick up the phone, and someone lets them in.
That is the allegation in a lawsuit now making headlines in the United States. Clorox, one of the most recognisable consumer goods companies in the world, is suing its IT services provider, Cognizant, claiming that a help desk technician repeatedly reset passwords and bypassed multi-factor authentication for an attacker impersonating a Clorox employee. Those actions, Clorox says, opened the door to one of the most disruptive breaches in its history, halting production and distribution and costing an estimated $380 million, close to £300 million.
To the British IT leader, this might sound like a distant drama across the pond. But the implications are chillingly local. Because what happened in Atlanta could just as easily happen in Aberdeen, or Ashford, or Acton. UK enterprises are no less reliant on third-party IT providers. And in many cases, they are even more exposed due to resource constraints, fragmented oversight, and legacy thinking about accountability.
The method was devastatingly simple. No zero-day vulnerabilities. No malware with a Hollywood backstory. Just persistence, confidence, and a support process that trusted the caller.
According to court filings, the attacker, allegedly a member of the Scattered Spider hacking group, contacted the Cognizant-run help desk posing as a Clorox employee locked out of their account. Over a series of calls, the help desk granted their requests: passwords were reset, MFA challenges were removed or circumvented, and the attacker was issued fresh, valid credentials.
With those credentials, the attacker walked straight past the organisation’s perimeter defences. Within days, manufacturing systems stalled. Distribution lines were disrupted. Orders could not be fulfilled. The breach became a shareholder issue, a media story, and a costly operational crisis.
This was not an advanced technical compromise. It was a social engineering campaign, and a highly effective one. Which is why it should be keeping UK IT leaders awake at night. Because we have already seen the same playbook here.
On Saturday 19 April 2025, while much of the UK was preoccupied with the long Easter weekend, Marks & Spencer began to suffer a series of unexplained outages. In-store contactless payments failed. Click-and-collect orders could not be processed. Customers complained on social media, reporting abandoned baskets and frozen tills.
Three days later, M&S confirmed publicly that it was dealing with a major cyber incident. By Friday 25 April, the situation had escalated: the retailer suspended online ordering for its clothing and home ranges entirely.
Behind the scenes, investigators traced the breach back to a supplier. The attacker had not found an unpatched server or stolen a database backup. They had gained entry through a third-party help desk by convincing support staff that they were a legitimate M&S employee in need of a reset.
Tata Consultancy Services, which provides IT help desk services to M&S, was named in multiple press reports as the possible supplier in question, though M&S has never officially confirmed this. What is certain is that the breach was a case of social engineering, not a technical exploit.
The damage was sustained. Online orders in Great Britain only resumed, partially, on 10 June, nearly two months later. M&S has warned investors that the incident will reduce profits by up to £300 million. Analysts estimate the company’s market value dropped by over £700 million in the days following disclosure.
Nor was M&S the only target. The Co-op suffered disruptions to contactless payments and store operations. Harrods was also reported to have experienced issues linked to similar methods. The National Cyber Security Centre responded by issuing urgent guidance to retailers: review your help desk verification procedures immediately.
The Clorox and M&S breaches have a common DNA. Both began with a phone call to a help desk. Both succeeded because the agent trusted the caller’s identity. Both involved resetting credentials that became the keys to an operational meltdown.
In both cases, the breach did not hinge on the sophistication of the attacker’s technical tools. It depended entirely on the vulnerability of human process, a process that exists in almost every UK enterprise today.
And therein lies the problem. Most organisations have designed their service desks for efficiency and customer satisfaction. The performance metrics are clear: average handling time, first-call resolution, ticket closure rates. These KPIs incentivise agents to move quickly and keep the caller happy. None of them reward taking extra time to interrogate a request or escalate a reset for further verification.
For attackers, this is an open invitation.
In the US, Clorox’s case against Cognizant is shaping up to be a precedent-setter. Clorox alleges breach of contract, negligence, and mishandled incident response. Cognizant rejects the claims, maintaining that it provided only a limited service and was not responsible for Clorox’s wider security posture.
For UK IT leaders, this should trigger a review of every supplier agreement in your portfolio. The UK legal and regulatory environment leaves no safe harbour in “the vendor did it” excuses.
The Information Commissioner’s Office has repeatedly stated that both data controllers and processors are responsible for implementing “appropriate technical and organisational measures”. This year, the ICO fined a software supplier directly for security failures that led to a breach, even though that supplier was operating under contract to another company.
For financial services and other regulated sectors, the PRA’s Supervisory Statement SS2/21 and the FCA’s operational resilience rules impose specific obligations on outsourcing and third-party risk management. These include contractual rights to audit suppliers, requirements to test controls, and clear exit strategies if a supplier cannot meet security expectations.
The NCSC’s post-Easter guidance to UK retailers could not have been clearer: if your help desk can reset credentials without rigorous verification, you are vulnerable. If that help desk belongs to a supplier, you are still responsible.
Help desk staff are not careless or unprofessional. The reality is that they operate in high-pressure environments with multiple, often conflicting demands: resolve the issue quickly, keep the caller satisfied, minimise ticket backlog. In outsourced arrangements, the person handling the reset may be thousands of miles away, several contractual layers removed from the company whose systems they are accessing. Their scripts may be outdated, their training generic, and their understanding of the client’s risk environment minimal.
Groups like Scattered Spider specialise in exploiting this gap. They study corporate structures, learn the terminology of internal projects, and mimic the tone of a stressed but important employee. They often hold partial information from previous breaches, such as names, job titles and office locations, which makes their impersonation convincing. Once on the call, they present a plausible story and a sense of urgency, and more often than not, the reset is granted.
For years, the industry has talked about “Zero Trust” as the solution to modern cyber threats. But these breaches expose its most glaring blind spot, the human interface.
If your help desk can reset a password or bypass MFA without watertight verification, your Zero Trust model is compromised before it has even begun to work. The sophistication of your endpoint detection or your cloud security controls is irrelevant if the front door is opened by someone trying to be helpful.
This is not a technology problem. It is a process problem, and by extension, a leadership problem.
The answer is not another layer of software or a shiny security dashboard. It is a cultural and procedural reset.
Identity verification must be treated as a security-critical control, not an administrative step. That means clear policies that no credential reset occurs without robust, independent verification, and that no urgent business request overrides that policy. It means empowering agents to say “no” when verification fails, and protecting them from performance penalties for doing so.
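Expressed as logic, the policy is simple: a reset proceeds only when every independent check has passed, and urgency has no bearing on the outcome. The Python sketch below is a minimal illustration; the named checks are placeholders for whatever a real desk would use, such as a callback to a number held on record or a manager's confirmation.

```python
# Illustrative sketch: a credential reset only proceeds when all
# independent verifications have passed. The check names are invented
# placeholders, and a "caller says it is urgent" flag deliberately has
# no effect on the outcome.

REQUIRED_CHECKS = {"callback_to_registered_number", "manager_confirmation"}

def may_reset_credentials(request):
    """Deny unless every required verification has independently passed."""
    passed = set(request.get("verifications_passed", []))
    return REQUIRED_CHECKS.issubset(passed)

urgent_but_unverified = {
    "user": "j.smith",
    "urgent": True,  # ignored by design: urgency never overrides policy
    "verifications_passed": ["caller_knew_employee_id"],
}
fully_verified = {
    "user": "j.smith",
    "verifications_passed": ["callback_to_registered_number",
                             "manager_confirmation"],
}
print(may_reset_credentials(urgent_but_unverified))  # False
print(may_reset_credentials(fully_verified))         # True
```

Note what is absent: there is no code path where a convincing story, a senior title or an urgent flag shortens the list of required checks. That is the property the Clorox and M&S processes lacked.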
Boards need to treat help desk risk as a strategic issue. If a supplier’s help desk can grant access to your systems, then the legal, financial and reputational liability belongs to you. That requires regular audits of help desk processes, shadowing of live calls, and commissioning of unannounced social engineering tests. It also means engaging with suppliers to ensure they have the training, processes, and contractual obligations to resist manipulation.
The most unsettling aspect of the Clorox case is the likelihood that the technician involved believed they were doing the right thing. They were following the script. They were solving a problem for someone they thought was a colleague. The process said “yes”, so they said “yes”.
This is what makes the help desk such an effective attack surface. It is not malice. It is misaligned incentives. Procedure without context. And unless IT leaders address that, the breaches will continue.
If you lead technology in a UK organisation, the clock is ticking.
First, map the access that your help desks, internal and outsourced, actually have. Not what the contract says, but what the agents can do. Then, test the process yourself. Call the desk as “you” from an unrecognised number. See what happens.
Engage your suppliers. Demand to know their verification process in detail. Ask how they train staff on social engineering. Ask when they last failed a test, and what changed as a result. If the answers are vague or defensive, you have a problem.
Work with your board to make help desk compromise a recognised strategic risk. That means measurable oversight, not vague assurances. Insist that social engineering testing is part of your assurance programme. Review contracts and add language that gives you audit rights, testing rights, and the ability to demand remediation.
Finally, remember that this is as much about culture as controls. Build an environment where an agent feels rewarded for stopping a suspicious reset, even if it means telling a genuine senior executive to wait. Because the only thing more costly than slowing down an access request is speeding it up for an attacker.
Could someone call your help desk today, convincingly impersonate an employee, and obtain valid access to your systems?
If the answer is anything other than “impossible”, you are not ready.
The attackers have already shown us their playbook. They have shown it in Atlanta. They have shown it in London. And they will keep showing it, until we change the rules of the game.
What’s your take? Could your help desk stand up to a determined and convincing attacker armed with only a phone and a story?
Let’s share the good, the bad and the messy middle when it comes to securing the human layer of our cyber defences.
Disclaimer: This article is provided for general information only and does not constitute legal or technical advice. The author is not a legal professional. Organisations should seek independent legal counsel and technical advice before relying on, or acting upon, any of the points discussed.
2025-08-04
A deeper look into Muddled Libra’s modular team structure, AI-enabled deception, ransomware partnerships, and the defences organisations need now.
Image credit: Created for TheCIO.uk by ChatGPT
In July 2025, Unit 42 published its landmark assessment entitled Muddled Libra Threat Assessment: Further Reaching, Faster, More Impactful. It captured a dramatic evolution of the adversary formerly known to many as Scattered Spider or UNC3944. Organisations across government, retail, insurance and aviation have now been forced to confront a threat actor with unprecedented speed, agility and destructive potential. This article brings missing intelligence into the conversation by profiling the modular structure of the threat actor, their partnerships with ransomware-as-a-service providers, their advanced use of artificial intelligence and voice deepfakes, and the critical set of recommended defensive controls.
The aim here is to move beyond awareness to action. IT and business leaders must see Muddled Libra not as a distant menace but as a sophisticated adversary that threatens infrastructure, core operations and digital resilience.
Unit 42 describes Muddled Libra as operating in a decentralised, modular fashion. Rather than a monolithic gang, the adversary is made up of specialised sub‑teams that function like small enterprises. One cell may focus on reconnaissance and victim profiling, another on call‑centre based vishing, yet another on endpoint lateral movement or ransomware deployment. This modular structure creates resilience: if one part of the network is disrupted, others continue operations unabated. It also enables a playbook operating at scale. Arrests in the UK in mid‑2024 reduced capacity temporarily, but the structure rebounded swiftly under new leadership. The law enforcement wins served as deterrence and capacity degradation, not elimination.
Another critical accelerant in Muddled Libra’s evolution has been formal partnerships with a variety of ransomware-as-a-service (RaaS) providers. Unit 42 identifies DragonForce (also known as Slippery Scorpius) as a key partner since April 2025, but the group also contracts with ALPHV (Ambitious Scorpius), Qilin (Spikey Scorpius), Play (Fiddling Scorpius), Akira (Howling Scorpius), and RansomHub (Spoiled Scorpius). Through these alliances, Muddled Libra has shifted beyond purely encrypting data to executing destruction of virtual infrastructure through legitimate management tools. In one documented case, VMs were deleted at scale using ESXi tooling, rendering backups ineffective and demanding ransom for restoration of cloud assets.
This evolution transforms the nature of extortion. Victims can no longer rely solely on backup restoration when infrastructure has been directly obliterated. The threat now extends into critical SaaS operations and cloud‑native environments.
Perhaps the most unsettling development is Muddled Libra’s adoption of artificial intelligence and deepfake voice technology to manipulate helpdesk staff and victims in real time. Unit 42 confirms that the group now generates voice clones using mere seconds of publicly available audio, such as from media interviews or earnings calls, to engineer vishing calls that sound convincingly like executives or IT staff. This capability converts the human firewall into an unreliable defence. Even vigilant teams cannot reliably distinguish synthetic voices from authentic ones.
Moreover, Muddled Libra leverages AI‑driven tools to automate reconnaissance. Large language models produce impeccably written phishing lures tailored to individuals based on scraped public profiles. Algorithms assemble hierarchical maps of target organisations, uncovering help desk escalation paths and authentication fallback vectors. As one expert summarised, layering in AI can elevate the number of victims from hundreds to tens of thousands in a single campaign. Such automation makes each intrusion dramatically more scalable.
With this tech‑augmented operational model, traditional training and awareness are not enough. The defence must be technical, procedural and behavioural, matching attacker sophistication rather than relying on hope that staff will recognise deception.
Between January and July 2025, Unit 42 tracked intrusion operations across four main sectors: government, retail, insurance and aviation. The group executed sequential campaigns across UK and US retailers in spring, pivoted to US insurance firms in June, and by mid-July was striking aviation clients in both the United Kingdom and North America. This flexibility underlines the group’s ability to shift campaign focus quickly while maintaining a consistent playbook centred on help-desk vishing and credential resets.
Retail giants such as Marks & Spencer and Harrods in the UK were confirmed victims in attacks that led to data theft and ransom demands. Meanwhile in the insurance space, breaches such as that at Aflac emphasise that financial services are now firmly within their crosshairs. Aviation organisations including WestJet and Hawaiian Airlines publicly reported disruptions linked to Scattered Spider associated activity.
Muddled Libra’s tradecraft is deliberately designed to execute quickly, often before detection and response teams can react. Across incidents Unit 42 investigated, the average time from initial access to containment was just one day, eight hours and forty-three minutes. In some cases, the adversary escalated privileges to domain administrator within forty minutes of first contact. These operations typically began with vishing of a help desk agent, followed by a password and MFA reset, installation of legitimate remote management tooling, credential harvesting, lateral movement and, finally, extortion.
Such speed leaves little margin for error on the defensive side. Without cloud‑native monitoring and rapid conditional access enforcement, malicious activity can succeed before it is even observed.
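One behavioural-analytics pattern that targets this exact sequence is correlating an MFA reset with a prompt sign-in from an address the account has never used. Below is a minimal sketch over a generic event log; the event schema and the sixty-minute window are assumptions for illustration:

```python
from datetime import datetime, timedelta

def flag_suspicious_resets(events, window_minutes=60):
    """Flag accounts where an MFA reset is followed, within the window,
    by a sign-in from an IP address the account has never used before.
    Events are dicts with 'user', 'type' ('mfa_reset' or 'signin'),
    'time' (a datetime) and, for sign-ins, 'ip'."""
    known_ips = {}   # user -> IPs seen on earlier sign-ins
    last_reset = {}  # user -> time of most recent MFA reset
    flagged = set()
    window = timedelta(minutes=window_minutes)
    for ev in sorted(events, key=lambda e: e["time"]):
        user = ev["user"]
        if ev["type"] == "mfa_reset":
            last_reset[user] = ev["time"]
        elif ev["type"] == "signin":
            ip = ev["ip"]
            reset_at = last_reset.get(user)
            if (reset_at is not None
                    and ev["time"] - reset_at <= window
                    and ip not in known_ips.get(user, set())):
                flagged.add(user)
            known_ips.setdefault(user, set()).add(ip)
    return flagged
```

A real deployment would feed this from identity-provider audit logs and trigger an automatic session revocation rather than a report, but the correlation logic is the same.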
Despite their modern sheen, Muddled Libra relies heavily on living off the land, preferring the legitimate remote monitoring and management (RMM) tools already present in target environments. Recorded tactics include the abuse of remote tools such as AnyDesk, RustDesk, ConnectWise, Tailscale, Pulseway and more. They also abuse hypervisors, cloud management platforms and even EDR and endpoint agents to embed persistence and escalate access. Once inside, they extract credentials from the Active Directory database (NTDS.dit) or with Mimikatz, then leverage Microsoft 365 and SharePoint for internal reconnaissance and data exfiltration.
This strategic avoidance of custom malware enhances stealth, reduces detection probability, and expedites exploitation of systems already trusted by enterprise security.
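Because the tooling is legitimate, detection has to lean on policy rather than signatures: anything outside the one sanctioned RMM tool is a finding. Below is a minimal sketch, assuming a per-host process inventory; the signature table and approved list are illustrative:

```python
# The single RMM tool the organisation sanctions (illustrative choice).
APPROVED_RMM = {"connectwise"}

# Executable names associated with common RMM tools (illustrative subset).
RMM_SIGNATURES = {
    "anydesk.exe": "anydesk",
    "rustdesk.exe": "rustdesk",
    "screenconnect.exe": "connectwise",
    "tailscaled.exe": "tailscale",
}

def unapproved_rmm(host_processes):
    """Return {host: set of unsanctioned RMM tools observed running},
    given a mapping of host -> list of running process names."""
    findings = {}
    for host, procs in host_processes.items():
        hits = set()
        for proc in procs:
            tool = RMM_SIGNATURES.get(proc.lower())
            if tool and tool not in APPROVED_RMM:
                hits.add(tool)
        if hits:
            findings[host] = hits
    return findings
```

In practice the inventory would come from an EDR or asset-management API, and a finding would page a responder, because a second RMM tool appearing on an estate is exactly the persistence pattern described above.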
The Unit 42 report emphasises the need for cohesive defensive strategy built around modern cloud identity, behavioural analytics, organisational readiness and process resilience.
Muddled Libra’s rise demonstrates that cybersecurity is no longer a technical domain alone. When organisations are hit with destructive ransomware operations that shortcut traditional recovery through infrastructure deletion, financial cost, litigation risk and trust damage multiply in severity. Public sector victims face service interruption; private sector leaders suffer stakeholder fallout. Cyber risk has therefore become a boardroom issue, not merely an IT one.
According to Unit 42, Muddled Libra will continue evolving along its current trajectory. Its modular structure means that even with arrests or takedown actions, new cells emerge quickly. The group’s cloud-first mindset, coupled with RaaS partnerships, ensures it will refine its destructive capabilities over time. Organisations without visibility and control over cloud-native infrastructure are vulnerable to escalated data theft, extortion and infrastructure denial.
At the same time, the automation enabled by AI means campaigns will become increasingly multi‑vector and global. Defenders should anticipate voice‑based social engineering across countries, languages and time zones. Standard awareness training will fail: adversaries already speak like your executives and know your org chart. Detection must move to machine speed.
Finally, information‑sharing efforts between public and private sectors remain vital.
For UK IT and business leaders, the imperative is clear. Now is the time to adopt proactive, coordinated strategies across identity, cloud access, detection capabilities and organisational readiness.
What’s your take? Are your helpdesk, access policies and exec team ready to counter real-time AI-driven voice phishing?
Let’s share the good, the bad and the messy middle of defending identity, trust and cloud-first infrastructure before the adversaries redefine our risk thresholds.
Disclaimer: This article is provided for general information only and does not constitute legal or technical advice. The author is not a legal professional. Organisations should seek independent legal counsel and technical advice before relying on, or acting upon, any of the points discussed.
2025-07-28
Phishing remains the number one threat vector for organisations. Here's why user training still matters and what to do the moment someone clicks a malicious link.
Image credit: Created for TheCIO.uk by ChatGPT
Phishing remains the most persistent and damaging cyber threat facing organisations across the UK. Whether it comes through an unexpected email, a spoofed login page or a WhatsApp message purporting to be from IT, phishing succeeds not because of technical brilliance but because of human fallibility.
This makes phishing unique. Unlike a zero-day exploit or brute-force ransomware tool, phishing relies almost entirely on a person making a split-second decision to click. That decision can happen after a long day, a moment of distraction or out of misplaced trust in the message’s source. For security leaders, it creates the ultimate challenge: no control over the attacker’s timing and no guarantee of user behaviour.
The vulnerability is not a software bug. It’s a moment of inattention. It’s the absence of doubt when doubt is most needed. And because users are human, that vulnerability cannot be patched in a traditional sense. The risk has to be managed in a very different way.
The risk is real and escalating. In July 2025, the University of Hull was hit by a targeted phishing campaign that compromised 196 accounts in a matter of hours. The attackers used those accounts to send further scam messages and demand money from recipients. The university responded quickly, blocking accounts and containing systems, but the damage to operational continuity and trust was significant. Email and Microsoft Teams access was suspended for affected users, disrupting daily workflows and teaching schedules.
The Hull incident serves as a clear reminder: phishing is not just a risk to individual credentials, it’s a threat to business continuity. Once attackers are inside a network, even for a short period, they can exploit trust, move laterally, and create reputational fallout that persists long after access has been restored.
That’s where phishing training earns its place. When done well, it raises baseline awareness, increases the chances of suspicious links being flagged and reduces the time between compromise and detection. But let’s be clear: training is not a firewall. It doesn’t prevent incidents. It buys you time. And when every minute counts after a compromise, that time is everything.
The first and most important goal of phishing training is to build muscle memory. Repetition and variation are key. Users need to be exposed to different types of messages: fake invoices, fake HR updates, fake calendar invites. Each scenario builds recognition and instinct. Over time, patterns emerge and users begin to question the unexpected.
Good training is not just about information. It is about simulation. Clicking on a link in a test environment is not a failure, it’s a teaching moment. And the more realistic those moments are, the more confident users become in the real world.
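The mechanics of such a programme are simple to instrument. Below is a minimal sketch of per-scenario metrics, assuming each simulation result records whether the user clicked and whether they reported; the scenario names are illustrative:

```python
from collections import defaultdict

def campaign_metrics(results):
    """Summarise simulated-phishing results per scenario type.
    Each result is a (scenario, clicked, reported) tuple of
    a string and two booleans."""
    stats = defaultdict(lambda: {"sent": 0, "clicked": 0, "reported": 0})
    for scenario, clicked, reported in results:
        s = stats[scenario]
        s["sent"] += 1
        s["clicked"] += clicked
        s["reported"] += reported
    # Rates per scenario reveal which lures the organisation is weakest against.
    return {k: {"click_rate": v["clicked"] / v["sent"],
                "report_rate": v["reported"] / v["sent"]}
            for k, v in stats.items()}
```

Tracking report rate alongside click rate matters: a rising report rate is the signal that the culture described below is taking hold, even while some clicks still happen.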
Equally important is building a culture where users aren’t punished for reporting clicks. If someone realises they’ve clicked a bad link, the clock starts ticking. The longer they stay quiet out of fear, the more damage an attacker can do. The best security cultures reward reporting. They treat a reported click as a win, because the alternative is silence.
This cultural shift is subtle but powerful. It means framing security as a team effort rather than a gatekeeping exercise. It means encouraging questions, not just issuing mandates. And it means celebrating when a user catches a phish, even if they did so after initially falling for it.
So what should happen when someone clicks?
First, isolate the user’s device. If your Endpoint Detection and Response (EDR) tool hasn’t already flagged the event, the IT or security team should disconnect the machine from the network to prevent further command-and-control traffic.
Next, identify what was accessed. Was it just a link? Did it request credentials? Was malware downloaded? Pull browser logs, check DNS traffic and review any log-in attempts from new IP addresses or devices.
Reset credentials and invalidate active sessions. If the phishing attempt was credential-harvesting, assume the password is already in the wrong hands. For organisations using Single Sign-On (SSO), this step is critical. Change the password, kill all sessions and monitor for reauthentication.
Let staff know what happened, what to look out for and what changes — if any — will be rolled out in response. The worst thing you can do is stay silent. People need context to stay vigilant.
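The log-review step above can be sketched in a few lines: given the time of the click, pull the account's subsequent sign-ins from addresses it had never used before. The log layout here is an assumption for illustration:

```python
def new_ip_signins(auth_log, user, click_time):
    """Return sign-ins by `user` at or after `click_time` from IP
    addresses the account had not used before the click.
    auth_log entries are (user, time, ip) tuples."""
    # Baseline: every address this account used before the incident.
    baseline = {ip for u, t, ip in auth_log if u == user and t < click_time}
    # Anything new after the click is a candidate attacker session.
    return [(t, ip) for u, t, ip in auth_log
            if u == user and t >= click_time and ip not in baseline]
```

Any hit from this check feeds directly into the credential-reset step: assume those sessions are hostile, kill them, and force reauthentication.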
The University of Hull, to its credit, handled the communication aspect better than most. Support centres were set up for in-person assistance. Affected users were updated through alternate channels. IT teams responded quickly to restore services. But even with a fast response, the fact that nearly 200 accounts were compromised shows how quickly phishing attacks can escalate inside an organisation without widespread vigilance.
A phishing link doesn’t need to deliver ransomware to cause chaos. Disrupted access to systems, broken trust in communications and the potential for follow-up fraud all create cascading effects. The downtime is real. The reputational damage is real. And the opportunity cost, lost teaching, delayed research, confused students, can linger for weeks.
Good phishing response is not about blame. It’s about speed, transparency and culture. When users are trained to spot red flags and know exactly what to do after clicking, the risk drops dramatically.
To build organisational resilience, leaders need to:
There is no such thing as a click-proof organisation. But there is such a thing as a resilient one. And resilience starts with preparation.
Has your organisation ever had to deal with a phishing click in real time? What saved the day, or what fell short?
Let’s share the good, the bad and the messy middle. The more openly we talk about failures and recoveries, the stronger our collective defences become.
2025-07-27
BBC Panorama's "Fighting Cyber Criminals" delivers a sobering reminder that cybercrime is no longer hypothetical – it's operational, scalable and happening daily. The attacks are sharper, the damage harder to reverse, and the response often muddled.
Image credit: Created for TheCIO.uk by ChatGPT
BBC Panorama’s latest investigation doesn’t so much break news as expose what most IT leaders already know. The attacks are already happening. They don’t come with warnings, or countdown clocks. They begin with a link, a guessable password or a cloned login page. The programme, Fighting Cyber Criminals, aired this month and laid bare the scale of what’s unfolding behind the firewalls of councils, companies and public utilities across Britain.
The documentary takes viewers behind the curtain at the National Cyber Security Centre, Britain’s digital front line. Inside the NCSC’s threat response room, the backdrop is one of ceaseless vigilance. Analysts comb through data, link indicators of compromise, and chase malicious IP trails across continents. It’s a glimpse into the reality: ransomware is a 24-hour industry. The UK now sees at least one confirmed ransomware attack per day.
Those are just the ones we hear about.
Panorama focuses on a case that hardly made headlines – the quiet collapse of KNP Logistics. A 158-year-old transport firm in Northamptonshire, it was crippled by ransomware in late 2023. It started with a password. It ended with 700 jobs lost, a shuttered fleet and a company left with no operational control. The attackers didn’t need to break in. They walked through the front door.
The ransom? Between £3 million and £5 million, depending on who you ask. The company never recovered.
Panorama doesn’t sensationalise. It doesn’t need to. The real-world footage is powerful because it mirrors what so many CIOs see every day: users clicking phishing links, flat MFA coverage, ageing systems wrapped in modern branding, and a boardroom that still thinks security is IT’s problem.
The episode turns its lens to South Staffordshire Water, where attackers demanded a ransom under threat of tampering with supply infrastructure. The utility refused to pay. The incident prompted an overhaul of its cyber controls, but the story might have ended very differently.
What stood out most from the episode wasn’t the NCSC’s posture or the NCA’s readiness. It was the disconnect.
Despite these daily incidents, most of the UK’s public and private boards still don’t treat cybersecurity as a strategic priority. For many, it remains a compliance box, something that gets mentioned after the finance slides or buried in risk registers with generic language like "data breach" or "IT outage".
The numbers alone are frightening. The average ransom demand for a mid-sized UK organisation is now estimated at £4 million. And that’s before calculating downtime, data loss, remediation costs, reputational damage and legal exposure.
And yet – we still see councils running decade-old on-prem servers. We still see default admin accounts, expired SSL certificates, flat Active Directory forests, and backup systems that haven’t been tested in real-world failover mode since the day they were installed.
Let me be blunt. Too many executives are betting their business on hope. Hope that it won’t be them. Hope that insurance will cover it. Hope that someone in IT has already sorted it.
Hope is not a strategy. It never was.
As Head of Technical Operations and Cyber, I’ve had these conversations at every level. The CFO who asks whether we “really need MFA for everyone.” The project sponsor who “needs that exception just this once.” The line manager who thinks cyber awareness training is optional. The legacy supplier who tells us, flat out, that they don’t support secure API integration.
Every one of these moments is a crack in the wall. A way in.
Panorama reminds us that attackers don’t need to invent new exploits. They just need to find the people and processes that gave up defending the old ones.
And that’s the real story here. We’re not failing because the threat is evolving too quickly. We’re failing because we haven’t done the basics. And because we’re still treating cybersecurity as a cost centre instead of a resilience function.
The solution is painfully clear, but rarely easy to implement: enforce the fundamentals. Patch aggressively. Remove legacy systems. Insist on MFA, even when it’s inconvenient. Run red team exercises. Encrypt everything. Validate your backups. Drill your incident response like it’s a fire evacuation.
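"Validate your backups" deserves emphasis, because an untested backup is only a hope. Below is a minimal sketch of the principle, assuming a local directory and a tar archive; a real estate would restore to isolated infrastructure and verify application behaviour, not just file hashes:

```python
import hashlib
import os
import shutil
import tarfile
import tempfile

def backup_and_verify(src_dir):
    """Back up src_dir to a tar archive, restore it elsewhere, and
    verify every file's SHA-256 matches the original. A backup only
    counts once a restore has been proven to produce identical data."""
    def digests(root):
        out = {}
        for base, _, files in os.walk(root):
            for name in files:
                path = os.path.join(base, name)
                with open(path, "rb") as fh:
                    out[os.path.relpath(path, root)] = (
                        hashlib.sha256(fh.read()).hexdigest())
        return out

    work = tempfile.mkdtemp()
    try:
        archive = os.path.join(work, "backup.tar.gz")
        with tarfile.open(archive, "w:gz") as tar:
            tar.add(src_dir, arcname=".")       # take the backup
        restore = os.path.join(work, "restore")
        with tarfile.open(archive) as tar:
            tar.extractall(restore)             # prove it restores
        return digests(src_dir) == digests(restore)
    finally:
        shutil.rmtree(work)
```

Scheduling a check like this is the difference between "we have backups" and "we have tested recovery", which is exactly the gap the KNP case exposed.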
And most of all – educate your people.
The most powerful firewall in the world won’t stop someone from wiring £80,000 to a fraudster if they believe the CEO sent the email.
Boards need to get this. Not in theory. Not in bullet points. In blood, sweat and budget.
Panorama did an excellent job of showing what happens when that doesn’t occur. But the episode should be shown in every council, every NHS trust, every mid-sized manufacturer with an exposed RDP port and an old insurance policy.
The biggest risk to British organisations right now isn’t China, Russia or some faceless hacking syndicate. It’s the belief that we are too small to matter, or too old to be vulnerable.
You’re not.
They’re coming for everyone.
What’s your take? Have we normalised cyber incidents as the cost of doing digital? Or is there still time to change the culture before the next wave hits?
Let’s share the good, the bad and the messy middle. Who’s genuinely ready – and who’s still hoping it won’t be them?
About the Author
Ben Meyer is a senior IT and cybersecurity leader with a decade of experience spanning scale‑ups, global enterprises and the public sector (contract). He specialises in modernising IT operations, implementing cloud platforms, and embedding security and compliance frameworks aligned with ISO 27001 and GDPR. As the founder of Meyer IT Limited, Ben partners with startups, charities and high-growth organisations to deliver pragmatic, people‑centred technology leadership.
2025-07-26
Nearly 200 University of Hull accounts were blocked after a phishing campaign targeted students and staff with scam emails demanding money.
Image credit: Created for TheCIO.uk by ChatGPT
The University of Hull has confirmed that nearly 200 user accounts were compromised in a phishing email campaign earlier this week, prompting a swift internal response and temporary service disruption for staff and students.
The breach, which was first detected on Wednesday 23 July, saw attackers successfully exploit email accounts across the university’s internal systems. According to the university’s official statement, 196 users were affected by the scam, which involved malicious messages designed to appear as legitimate communications. Once the attackers had access, they used those accounts to send further fraudulent messages demanding money.
Hull’s IT security team worked with its third-party cybersecurity provider to contain the incident. Affected accounts were blocked quickly, cutting off the ability of the attackers to spread their phishing campaign any further. However, the swift action also meant that dozens of staff and students lost access to essential services such as Microsoft Teams and email while the accounts were being assessed and restored.
In a statement issued via the university’s website, officials reassured the wider campus community that the breach had been contained and that no widespread system failure had occurred. They emphasised that the university remained operational and that student and staff support teams were now working one-on-one to restore access and ensure that victims of the scam were supported. Those unable to log into their usual services were advised to present identification at the university’s IT help points in person.
The BBC reports that the attack appears to have been financially motivated, with scammers seeking direct payments through fake correspondence. University officials have not disclosed whether any money was actually transferred or whether police have become involved in the investigation. The attack is being treated as an isolated incident but sits within a broader context of growing cyberattacks targeting UK universities.
Institutions in the higher education sector continue to find themselves in the crosshairs of cybercriminals. Universities manage sprawling networks of user accounts, often with inconsistent security postures across departments. Students, in particular, can be susceptible to social engineering attacks due to their frequent transitions between systems and high levels of trust in institutional communications.
The incident at Hull follows a familiar pattern. Attackers typically send a small number of highly targeted emails that appear to come from university authorities, IT departments or financial offices. Once a single user clicks a link or replies to a message, the attacker gains a foothold inside the institution’s ecosystem. From there, access can be used to harvest data, move laterally through systems or send further phishing emails from within the network to boost credibility.
What differentiates the Hull case is the speed with which the university detected the breach and moved to isolate affected accounts. In contrast to several recent attacks across the UK higher education sector, the spread appears to have been curtailed before systemic harm could take place. Still, the fact that nearly 200 users were compromised before the breach was contained raises questions about how the initial emails bypassed existing security controls.
Universities have increasingly adopted multi-factor authentication, anti-phishing training and behaviour-based detection systems, but attackers have become more sophisticated in their tactics. In some cases, fake messages now include institution-specific language and signatures, making them harder to distinguish from legitimate communication.
A spokesperson for the university confirmed that wellbeing support was being offered to affected users. Students were directed to the Hubble help centre on campus, while staff were offered support through internal health and wellbeing resources. The university also provided a dedicated phone number for IT assistance and pledged to follow up directly with those whose access had been blocked.
This breach is unlikely to be the last of its kind. As universities expand their reliance on cloud-based services, third-party platforms and hybrid working environments, their attack surfaces will only grow. Cybersecurity experts continue to warn that without consistent investment in user education, threat intelligence sharing and incident response planning, the sector remains exposed.
For the University of Hull, the event serves as both a warning and a vindication. The warning lies in the sheer speed and reach of a targeted phishing campaign, able to penetrate nearly 200 accounts in one day. The vindication comes in the form of containment and response, which, according to available evidence, was fast enough to prevent broader damage.
No information has yet been released regarding the origin of the phishing campaign or whether law enforcement agencies have been asked to assist. The university said it would provide updates to staff and students directly via alternative channels while account access is gradually restored.
As of the time of writing, full service for the majority of users had yet to be reinstated. For those impacted, the disruption offers a stark reminder of how rapidly trust can be eroded when institutions become the targets of well-timed digital attacks.
What’s your take? Should UK universities be required to publish details of every phishing attempt that leads to account compromise?
Let’s share the good, the bad and the messy middle. Has your institution faced something similar? What worked, what failed, and what would you do differently next time?
Disclaimer: This article is provided for general information only and does not constitute legal advice. The author is not a legal professional. Organisations should seek independent legal counsel before relying on, or acting upon, any of the points discussed.
2025-07-26
The UK government will prohibit public sector organisations and critical infrastructure operators from paying ransomware demands. The policy aims to weaken the cybercriminal business model and improve national cyber resilience. But for it to work, reporting, funding and public sector readiness must evolve in parallel.
Image credit: Created for TheCIO.uk by ChatGPT
The UK Government has announced a major new measure to counter the growing ransomware threat: a ban on public sector bodies and critical infrastructure operators paying cyber ransoms. The aim is to disrupt the economic model behind these attacks and shift national cyber strategy from reactive recovery to active deterrence.
The announcement confirms that public sector organisations, including NHS trusts, local authorities and schools, as well as operators of critical infrastructure, will no longer be allowed to pay ransoms under any circumstances. For private organisations, the policy introduces a mandatory pre-notification requirement before any payment is made.
Security Minister Dan Jarvis described the change as part of a wider effort to “smash the cyber criminal business model”. The move is being widely interpreted as a turning point in UK cyber policy, and a challenge to organisational leaders to upgrade resilience.
The UK’s public services have suffered high-profile ransomware incidents over the past decade. The 2017 WannaCry attack severely disrupted NHS systems. More recently, ransomware-linked disruption has been reported in hospital pathology, library services, and across the private sector, including at major retailers such as M&S and Co-op.
Public support for tougher action has grown. A consultation held earlier this year found that nearly three-quarters of respondents supported banning public bodies from paying ransom demands. That public backing has given ministers cover for a strong stance.
The government’s messaging focuses on resilience, sovereignty and justice. But turning that ambition into operational reality will take more than legislation.
The proposals sit within the government’s broader cyber strategy. The Cyber Resilience Bill, expected later this year, will give enforcement agencies the power to fine organisations that fail to patch vulnerabilities or that neglect risk assessments.
Ransomware is not just a technical threat. It is an economic one. Cybercriminal groups often target public services precisely because they know that the stakes are high and that organisations are likely to pay to resume operations quickly.
The UK Government is trying to do what other governments, including the United States, have hesitated to do: remove the financial incentive. If attackers believe they are unlikely to get paid, they may move to less impactful strategies.
But this only works if the system behind public services can withstand the impact of an attack. That means recovery, not ransom, must become the standard response.
If public bodies can no longer pay, there is no negotiation. That increases the risk for attackers and reduces their likelihood of success. Over time, the hope is that this discourages targeting of public systems altogether.
The policy mandates better backups, offline recovery systems, and tested incident plans. This could strengthen operational resilience in areas that have historically under-invested in cybersecurity.
Forcing private sector organisations to notify before making payments ensures that intelligence is captured, patterns are recognised and regulators can intervene where necessary — particularly where sanctioned actors may be involved.
While few countries have formal bans, many are now discouraging ransomware payments and increasing enforcement against criminal networks. The UK’s move positions it as a leader in this space.
If systems go down and lives are at risk, as can happen in healthcare or emergency services, leaders may feel forced to pay despite the law. That puts frontline staff in an impossible position.
Small councils, academies, and NHS trusts may lack the funding, skills or capacity to rebuild systems without external help. If funding and support do not accompany the ban, the risk of prolonged disruption rises.
If encryption-based attacks no longer work, attackers may shift to stealing data and threatening to publish it. This avoids the need for system disruption and still creates leverage, particularly in politically sensitive or high-trust environments.
If pre-payment reporting is too complex or legally risky, private firms may bypass it entirely or turn to unregulated intermediaries. Clear, fast, confidential routes are essential.
This is not just a policy issue. It is an operational one. Leaders in the public and regulated private sectors should now assume that:
Steps to take now:
This policy will not eliminate ransomware. But it does provide a basis for a more mature response, one that refuses to treat criminal threats as a service disruption cost.
Ultimately, this is a bet. A bet that by removing the ransom option, the UK can both reduce attacks and push the public sector into a more resilient posture.
That bet will only pay off if organisations are supported. If contingency plans are tested. If sector-specific recovery frameworks exist. And if the burden of compliance is matched with practical help.
Otherwise, we risk a policy that is principled, but painful.
Banning ransomware payments is a bold move. It will frustrate attackers. It may frustrate some in the public sector too.
But it sets a direction: one where public data, public services, and public trust are not negotiable.
In the years ahead, we will look back at this moment as the point at which the UK said enough.
Let us make sure the system is ready to follow through.
2025-07-25
A new partnership between OpenAI and the UK Government marks a major moment in the role of AI in the public sector. But as the Memorandum of Understanding moves from statement to strategy, the focus must shift to capability, safeguards and long-term public value.
Image credit: Created for TheCIO.uk by ChatGPT
The announcement that the UK Government has signed a Memorandum of Understanding with OpenAI is more than just another story about artificial intelligence. It signals something bigger: a deliberate shift in how the state approaches AI adoption, infrastructure, and delivery at scale.
The Memorandum suggests collaboration across key areas including national infrastructure, service delivery, security research, and skills. It mentions the possibility of shared data environments. It commits to safeguards. It outlines an intention to invest in AI capabilities in the UK, including through the expansion of OpenAI’s presence.
This is a moment of strategic alignment between government and one of the world’s most influential AI companies.
But the benefits will only be realised if this agreement becomes a blueprint for capability and service transformation, not just a brand alliance or a procurement channel.
Though the MoU is not legally binding, it does set out a number of shared goals between the Department for Science, Innovation and Technology (DSIT) and OpenAI:
The document reflects a growing recognition that government cannot sit on the sidelines as AI evolves. But it also carries risks, especially where the public interest and private incentives diverge.
So far, the official messaging has focused on the promise: productivity, innovation, job creation and research acceleration.
That is all possible. But none of it is automatic.
The UK lacks dedicated public infrastructure for AI. Existing compute environments, training resources and secure sandboxes are limited. If this agreement accelerates investment in UK-based data centres, research partnerships and secure experimentation zones, it could move the UK from theory to practice much faster.
This would also reduce dependence on foreign compute assets, an important consideration for digital sovereignty and long-term resilience.
AI can help improve service delivery if deployed with care. For instance:
The agreement positions AI as a delivery asset, not just a policy topic, and that matters.
OpenAI’s expansion in London is welcome, but more important is what comes with it: data scientists, engineers, legal experts and infrastructure architects who can engage with government, academia and regulators.
There is potential here to seed a new generation of public AI talent, particularly if secondments, shared projects or co-designed tools are on the table.
The phrase “information sharing” in the MoU is doing a lot of work. It could mean aggregated, non-sensitive insights. It could also mean direct access to some of the most sensitive public datasets in the country.
That includes health records, benefits data, education results and criminal justice documentation. These datasets are powerful, and valuable.
If shared without clear legal and ethical guardrails, they risk being used to train commercial models without public consent or accountability.
Transparency is not a nice-to-have. It must be foundational. That includes data protection assessments, external review and a right for the public to understand and challenge use.
OpenAI is not a public utility. It is a commercial actor, with private investors, global priorities and a competitive roadmap.
This agreement must not become a de facto procurement pipeline. It should be a mechanism for joint work on standards, tooling and experimentation, not a commitment to embed a single vendor across the state.
Public sector technology should be plural, open and accountable. Any deployment of OpenAI models must be justified against those values, not simply assumed based on the MoU.
If departments use AI to bolt automation onto outdated workflows, the result will be more confusion, not less. Faster decisions, but not necessarily fairer ones. Personalised content that reinforces structural inequalities.
The real opportunity lies in rethinking services around AI, not using it to paper over structural cracks.
This is not a passive moment. Leaders across digital, data, operations and policy have a short window to ensure this agreement delivers value, and avoids becoming a missed opportunity.
Set clear expectations for AI in your service area. What are the outcomes? What should not be automated? What role does human judgment play? Get ahead of vendor pitches with your own public value tests.
This deal should not lead to external dependency. Build in-house teams who can evaluate models, test prompts, design safeguards and write clear service documentation.
If AI is embedded into a service, the user must know. There must be clear ways to opt out, challenge decisions, and speak to a person. Explainability is not theoretical, it must be operational.
Make this public. Pilot carefully. Publish results. Share learnings across departments. A single team cannot deliver safe, inclusive AI alone. It has to be a community of best practice.
This agreement could be a turning point. It could show how the UK can build services that are faster, fairer and more personal. It could place the UK at the forefront of safe, democratic AI development.
But only if we treat this not as an endpoint, but a starting point. Not as a transaction, but a long-term process. Not as a shortcut, but a structured test of capability.
This is not a partnership between equals. It is a partnership between public interest and private capability. To keep that balance right, the public sector must lead with confidence, clarity and care.
We now have the signal. The delivery comes next.
Disclaimer: This article is provided for general information only and does not constitute legal advice. The author is not a legal professional. Organisations should seek independent legal counsel before relying on, or acting upon, any of the points discussed.
2025-07-24
Why cybercriminals target charities, and how small organisations can reduce risk without breaking the bank.
Image credit: Created for TheCIO.uk by ChatGPT
In the cybercrime ecosystem, attackers don’t just chase value, they chase vulnerability.
Banks and fintechs are fortified, monitored, and resilient. Charities? Often not. And that makes them attractive for a different reason: they’re seen as easy wins.
Charities are small, underfunded, and reliant on trust. They work with sensitive data but lack technical defences. Many operate with thin IT support and aging infrastructure. In the eyes of cybercriminals, that’s the perfect recipe.
Most charities don’t have:
People open emails from charities. They click links. They want to help. That trust makes phishing and impersonation attacks far more effective.
Charities collect:
This is the kind of data attackers can sell or exploit.
Volunteers often use personal devices. Cyber hygiene varies. There's rarely formal onboarding, MFA enforcement, or remote device management.
The cost of a breach can go far beyond the financial:
In 2023, 24% of UK charities reported a cyber breach or attack, and the larger the charity, the more likely it was to be hit.
Most attacks aren’t advanced, they succeed because the basics are missing. Here’s how charities can become much harder targets, using free or low-cost measures.
Cost: Free
Cybersecurity is everyone’s responsibility.
Start with one clear message per month. Keep it practical and human.
Cost: Free
MFA is one of the most effective defences available.
Enable it on:
How-to links:
👉 Enable MFA in Microsoft 365
👉 Enable MFA in Google Workspace
Cost: Free
Unpatched software is a top attack vector.
Learn more:
👉 Mitigating Malware – NCSC
Cost: Low
Guide:
👉 NCSC backup checklist
Cost: Free
Too much access = too much risk.
Cost: Free–Low
Recommended:
Cost: Free
You don’t need reams of documentation. Focus on:
Templates available via NCSC:
👉 Policy templates for charities
Cost: ~£300+
Cyber Essentials is a UK government-backed scheme that helps small orgs:
Learn more:
👉 Cyber Essentials
Some regions offer funding support, so check with your local authority or grant body.
Cybercriminals aren’t just targeting banks, they’re looking for soft spots. And right now, too many charities fit that profile.
But cybersecurity doesn’t need to be expensive or complex. With free resources and a bit of focus, you can dramatically reduce your risk, and protect the data, donors, and communities that rely on you.
You don’t need to be perfect. Just harder to breach than yesterday.
2025-07-21
"KNP Logistics, one of the UK’s oldest haulage firms, collapsed after hackers exploited a single weak password and missing MFA. The incident is a stark reminder for IT leaders and business owners: basic cyber hygiene is still the frontline defence."
Image credit: Created for TheCIO.uk by ChatGPT
Sometimes cybersecurity fails aren’t about cutting-edge malware or zero-day exploits. They’re the result of old-school mistakes, like a single weak password, with catastrophic consequences. That’s exactly what happened to KNP Logistics, a UK haulage firm founded in 1865.
Last year, the Akira ransomware gang, believed to operate from Russia, broke into KNP by brute-forcing a guessable password. With multi-factor authentication disabled, they walked in. Once inside, they:
Within weeks, the firm entered administration. The result? 730 people lost jobs, a fleet of 350 trucks was grounded, and 158 years of business history vanished.
Here’s the brutal truth: ransomware gangs target companies like yours. Not because you’re rich, but because your defences are porous. And often, that porosity comes from the simplest vulnerabilities:
Even if your business day-to-day runs smoothly, events like this rarely come out of nowhere. They're the result of layered missteps, ignored basics that become fatal when stitched together.
If you’re a business leader or senior IT decision maker, here’s your moment. Put these on the table with your IT and security teams:
If you don’t have firm answers, it’s time to act.
All the tech in the world can’t fix human error. In KNP’s case, one reused password unravelled everything. Security culture isn’t about fear, it’s about habits and accountability:
These are small asks compared to losing millions, or your whole business.
Not all security improvements require big budgets. KNP could have been saved by enforcing existing tools: passwords and MFA. That’s discipline, not £s.
But it’s worth it. Because a few seconds of inconvenience is tiny compared to losing centuries of trust, staff livelihoods, and company valuation.
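Enforcement really can be a few lines of logic. As an illustrative sketch (the length threshold and the tiny deny-list are invented examples, not a recommended standard), a joiner or password-reset workflow could refuse weak choices before they ever reach production:

```python
# Illustrative deny-list only; a real control should also check breach
# corpora (e.g. a k-anonymity lookup) and enforce MFA on top.
COMMON_PASSWORDS = {"password", "password1", "letmein", "qwerty123", "welcome1"}

def password_acceptable(pw: str, min_length: int = 14) -> bool:
    """Reject passwords that are too short or on the deny-list."""
    return len(pw) >= min_length and pw.lower() not in COMMON_PASSWORDS
```

A check like this costs nothing to run at account creation or reset, which is exactly the kind of discipline the KNP story argues for.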
As Paul Abbott, a director at KNP, put it:
“What brought us down wasn’t a sophisticated hack, it was a simple human failing.”
If your next chat with IT buzzes with talk of “basic security stuff,” don’t tune it out. That’s not check-box noise. That’s your front door. Make sure it’s locked.
2025-07-20
Attackers are combining Microsoft Teams calls with Quick Assist to deploy malware and ransomware inside two hours. Here’s what every IT leader needs to know, and act on.
Image credit: Created for TheCIO.uk by ChatGPT
Attackers are calling staff directly via Microsoft Teams, posing as internal IT support. Once the conversation starts, they guide the target to open Quick Assist, Microsoft’s built-in remote support tool.
It sounds routine, a helping hand during a tricky moment. But in reality, it’s the start of a full compromise. Within the same session, attackers are launching PowerShell, dropping malware like Matanbuchus 3.0, and triggering Cobalt Strike or ransomware like Black Basta.
This isn’t theory. Microsoft, Morphisec and others have seen this playbook evolve rapidly, and copycats are on the rise.
The tactic isn’t new, but it’s been upgraded. Criminal groups now use subscription-based malware loaders, sell access on demand, and rehearse their delivery to slip past endpoint tools.
Quick Assist is signed by Microsoft, which often leads to misplaced trust. The app is genuine, but once an attacker convinces someone to read out a session code, it becomes a tunnel into the estate. Everything from keyboard access to command execution flows through it.
Microsoft Teams plays a key role. Many organisations leave federation open for ease of collaboration. Attackers exploit this by creating tenants named “IT-Support” or similar, then start calls that look and sound plausible, especially when paired with email noise, ticket references or even voice clones.
Morphisec timed one full compromise, from the initial Teams call to ransomware, at one hour and fifty-one minutes.
Targeting
Public profiles, leaked data and supplier lists offer everything needed to craft a convincing call.
Initial contact
The user gets a Teams message or voice call from “IT Support”, usually amid email noise or fake tickets.
Quick Assist session
A six-digit code is exchanged and access is granted. At this point, the attacker has full control.
Payload delivery
PowerShell pulls down a loader like Matanbuchus, which quietly prepares the next stage.
Privilege escalation
Tools like Cobalt Strike disable logs, extract credentials and spread internally.
Ransomware deployment
A ransomware package encrypts systems and exfiltrates data, all before security teams detect a breach.
Correlate Quick Assist and Teams activity
Look for Quick Assist Event ID 41002 within minutes of an external Teams call. This pairing should always raise a flag.
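Mechanically, the correlation is simple once both logs are exported. The sketch below is a hypothetical illustration, assuming you have already parsed external Teams call records and Quick Assist session starts (Event ID 41002) into timestamped lists; the field names are invented for the example:

```python
from datetime import datetime, timedelta

def correlate(teams_calls, qa_sessions, window_minutes=10):
    """Flag any Quick Assist session starting within `window_minutes`
    of an external Teams call. Inputs are lists of (timestamp, label)
    tuples parsed from exported logs."""
    window = timedelta(minutes=window_minutes)
    flagged = []
    for call_time, caller in teams_calls:
        for qa_time, host in qa_sessions:
            # Only flag sessions that start after the call begins.
            if timedelta(0) <= qa_time - call_time <= window:
                flagged.append((caller, host, qa_time))
    return flagged
```

In practice this logic would live in your SIEM as a scheduled query, with each flagged pair raising an alert for an analyst.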
Block outbound scripts during remote sessions
Any PowerShell execution to pastebins or URL shorteners during Quick Assist should be blocked or alerted.
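A crude but useful version of that control is a domain deny-list evaluated against any URL a remote-session process tries to fetch. The domains below are illustrative examples only; a production control would draw on maintained threat-intelligence feeds:

```python
from urllib.parse import urlparse

# Example entries only; maintain the real list from threat-intel feeds.
SUSPECT_DOMAINS = {"pastebin.com", "bit.ly", "tinyurl.com", "is.gd"}

def is_suspect_url(url: str) -> bool:
    """True if the URL's host is a known paste site or URL shortener,
    or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in SUSPECT_DOMAINS)
```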
Log all remote control sessions
Whether through video or keystroke capture, this gives vital context and deters insider risk.
Label external users in Teams
Highlighting external contacts disrupts social engineering and gives staff a prompt to pause.
Phase out Quick Assist
Move to Intune Remote Help, which includes RBAC, policy enforcement and session auditing. Microsoft itself now advises this.
Tighten federation controls
Limit Teams federation to a known allow-list. Disable anonymous joiners where possible.
Require call-back verification
No privilege reset or remote session should proceed without confirmation via a trusted number or device.
Run vishing simulations
Include Quick Assist prompts in phishing and vishing drills. Celebrate the people who say “no” and report it.
Invest in recovery, not just defence
Maintain clean, offline backups and rehearse business decision-making. A well-tested recovery limits the damage, and the ransom.
Quick Assist is a useful tool, but in the wrong hands, it becomes the attacker’s way in. The fix doesn’t start with new tech. It starts with policy, clarity and culture. Let’s give people the confidence to pause, verify and push back when something doesn’t feel right.
That’s how we stay ahead of the next “friendly” call.
How are your teams responding to suspicious calls today?
Talking points for senior management
2025-07-19
Large language models can invent facts... a risk that carries legal, compliance and reputational costs. Here’s how leaders can contain the damage.
Image credit: Created for TheCIO.uk by ChatGPT
Large language models (LLMs) now draft emails, write code and summarise contracts in seconds, yet they sometimes invent facts. These errors, known as hallucinations, are already landing in courtrooms and compliance reports. Understanding the stakes is now as important for non‑technical directors as it is for CIOs.
LLMs predict the next word in a sentence, not the truth. That means they can generate:
Research from Stanford’s Institute for Human‑Centered Artificial Intelligence (HAI) found legal‑specialist models hallucinate in roughly one answer in six. The team likens the issue to a sat‑nav that occasionally drops you in the wrong city – still useful, but you must check the road signs.
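The “next word, not the truth” point can be made concrete with a toy model. The sketch below is nothing like a production LLM (those use neural networks over tokens), but it shows the core mechanic: the model repeats whatever pattern is most common in its training text, whether or not it is true:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which in the training text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for w1, w2 in zip(words, words[1:]):
        model[w1][w2] += 1
    return model

def next_word(model, word):
    """Return the statistically most common follower, true or not."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None
```

Trained on “the cat sat on the mat the cat ran”, `next_word(model, "the")` returns “cat” simply because that pairing occurs most often: a fluent guess, not a checked fact.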
| How it bites | Real‑world cost | Quick defence |
| --- | --- | --- |
| Staff rely on bogus case law | Tribunal payout and staff distrust | Lawyer review before filing |
| Consultant memo cites fake regulation | Negligence claim and fee write‑off | Draft–approve workflow with SME check |
| Chatbot gives bad mortgage advice | FCA redress and fine | Guardrails and audit logs |
| Vendor API injects wrong data | SLA breach and reputational hit | Indemnity clause plus monitoring |
Insurance may soften the blow, but underwriters now ask for evidence of AI oversight before paying.
UK law firm TLT LLP warns that companies still owe a duty of care when customers rely on AI‑generated content, stressing that inaccurate outputs can breach FCA rules or contract warranties around “reasonable skill and care”. In professional services, misstatements can trigger negligence claims even when an AI drafted the error. High‑profile cases such as Mata v Avianca – where lawyers were sanctioned for filing citations invented by ChatGPT – illustrate the point.
Regulators are clear: businesses cannot hide behind a black box when mistakes harm consumers.
Hallucinations will not disappear soon – the creativity that makes LLMs powerful also makes them prone to fiction. Until verifiable AI arrives, businesses must invest in oversight – or budget for the consequences.
Disclaimer: This article is provided for general information only and does not constitute legal advice. The author is not a legal professional. Organisations should seek independent legal counsel before relying on, or acting upon, any of the points discussed.
2025-07-18
Every new tool sparks fear of job losses, but the reality is always more nuanced. AI won’t replace people; it will reshape how we work. Here’s what leaders need to know.
When the first steam engines rattled to life in the 18th century, the world braced for the end of human labour. The same fear resurfaced when computers entered office blocks in the 1970s and when the internet began stitching the world together a generation later. Each time, prophets of doom declared that machines would put people out of work for good. Each time, they were wrong.
Now, we find ourselves here again, but this time the machines can write emails, draft code and even produce passable poetry. Artificial intelligence has captured boardroom agendas, media headlines and our collective imagination. Once again, the question resurfaces: will AI replace us?
The truth is both simpler and more complex. AI is just another tool. It is remarkable, yes, but it is still a tool. And like every tool before it, it won’t erase our jobs outright. Instead, it will transform them.
For leaders in technology and beyond, understanding this distinction is crucial. Because what matters most now is not whether AI will take our jobs, but how we adapt our roles to make the most of it.
There is a long tradition of mistrusting new tools. The Luddites famously smashed textile machinery because they saw it as a threat to their livelihoods. In the end, mechanisation didn’t kill textile work. It reshaped it, unlocking new industries, markets and skills that no one could have imagined from the clattering looms of Yorkshire.
AI is the loom of our time. It automates tasks we once thought required uniquely human traits: judgement, creativity, intuition. But if you look closely, what AI does is closer to prediction than true understanding. A large language model can draft an article (though I’d wager this one will still read better than ChatGPT’s version). A generative AI tool can spin up marketing copy or summarise meeting notes. These are useful outputs, but they still need context, oversight and, above all, a human to steer the ship.
In that sense, AI is not so different from the spreadsheet or the search engine. We once needed clerks to add up columns of numbers by hand. Spreadsheets didn’t eliminate finance jobs; they made them more strategic. Search engines didn’t get rid of librarians; they gave knowledge workers instant access to information that once took days to uncover.
The key shift is this: AI is best thought of not as a replacement for human workers, but as an augmentation. It gives us leverage. It frees us from repetitive drudgery so we can focus on higher-value tasks.
Take software development. Generative AI can write boilerplate code, suggest bug fixes and even generate test cases. But no CTO worth their salt will fire the entire dev team and hand the keys to a chatbot. Instead, good leaders will ask: what happens when my engineers spend less time debugging and more time designing better products? What new services can we build when mundane tasks take minutes instead of hours?
The same applies to marketing, HR, customer service and countless other functions. AI can draft job descriptions, write first-pass emails and handle routine queries. But people are still needed to define strategy, build relationships and make sense of the results.
It’s true that some roles will disappear. History shows us that when technology removes repetitive tasks, the jobs tied solely to those tasks fade away. Switchboard operators, typists, factory line workers; these roles have dwindled or vanished altogether.
Yet work itself did not shrink. Instead, it shifted to places where human skills, empathy, judgement, creativity, are indispensable. In the process, entirely new jobs emerged. Nobody was hiring social media managers or cloud architects thirty years ago. Entire industries such as digital advertising, app development, e-commerce were built on the backs of technologies that were once viewed as job killers.
AI will be no different. It will render some tasks obsolete. But it will also create demand for new skills: prompt engineers, AI ethicists, data trainers. We will need more people who can bridge technical and human worlds such as people who understand how to ask the right questions, interpret the results and guide AI in ways that align with real business goals.
For business leaders, the biggest danger is not AI itself, but failing to adapt. The organisations that will fall behind are those that treat AI as a gimmick or, worse, an excuse to cut costs without rethinking how work should evolve.
Imagine a customer service centre that uses AI to automate routine queries. Great. But if leadership simply banks the savings and lets the human agents go, they miss the bigger prize. What if those agents could now focus on complex cases that build deeper customer loyalty? What if they could train the AI to handle ever more nuanced scenarios? What if they became customer experience designers rather than call handlers?
The same principle applies at the board level. AI can help draft reports, flag trends in data and surface insights leaders might otherwise miss. But decision-making still needs human context. An AI might tell you sales dropped 12% last quarter. Only you can ask the right follow-up questions: was it seasonal? A supply chain hiccup? A competitor’s new product launch? Tools can present facts, but meaning comes from people.
So, if AI won’t take our jobs but will transform them, what should we focus on?
First, cultivate curiosity. The best people I know are not the ones with the deepest technical knowledge, but the ones who keep asking questions. What can this tool do? What can’t it do? How could it help us work better?
Second, invest in adaptability. The pace of AI development means that what looks state-of-the-art today will feel quaint in five years. Teams that cling to old ways of working will struggle. Teams that embrace experimentation will thrive.
Third, double down on distinctly human strengths. Emotional intelligence, critical thinking, ethical reasoning, these are not easily codified into algorithms. They are also the traits that make organisations resilient in the face of constant change.
Finally, build cross-functional fluency. The most successful AI projects are rarely the sole domain of IT. They succeed when business leaders, technologists and end users collaborate to solve real problems, not just deploy shiny tools.
If you are an IT leader, or any leader for that matter, your job is not to have all the answers. Your job is to ask better questions, set the right guardrails and ensure your people feel empowered to use AI wisely.
Too many organisations rush headlong into AI adoption without clear principles. This creates risk, from biased algorithms to wasted spend on tools nobody uses. Good leadership means putting ethical frameworks in place, asking who benefits and who might be harmed, and being clear about where human oversight sits.
Equally, resist the temptation to hoard control at the top. The best AI use cases often come from the front lines, the sales rep who figures out how to use an AI assistant to cut admin time in half, or the operations manager who spots inefficiencies that a predictive model could help solve. Create space for experimentation. Celebrate small wins. Learn from failures.
One of the myths about AI is that its impact is inevitable, as if algorithms simply wash over us like a tide we can’t control. In reality, how AI changes work depends on the choices we make now.
Governments have a role to play, too. Regulation must keep pace with innovation. Education systems need to help people gain the digital literacy and critical thinking skills that AI-enhanced workplaces demand. But business has a responsibility as well. It is not enough to say, “We will reskill our people” while quietly hoping they’ll manage on their own. Investment in training, clear communication and honest dialogue are essential.
For all the anxiety AI stirs up, it also holds enormous promise. If we get this right, AI can help tackle complex problems faster, from improving patient outcomes in healthcare to driving sustainability in supply chains. It can give small businesses capabilities once reserved for big players. It can level the playing field, free up our time and make work more meaningful.
But it won’t do any of this on its own. It will do it through people, people who know how to use it wisely, ask better questions and put it to work in ways that reflect our values.
So the next time someone says AI will take your job, remind them of this: it is not the tool that shapes the future of work. It is how we choose to use it.
And if history is any guide, we humans have always been very good at turning new tools into new possibilities.
2025-07-16
AI tools are entering businesses faster than most teams can track, often through everyday platforms or individual experimentation. That’s exposing organisations to silent risks: leaked data, hallucinated outputs, and unaudited decisions. Without clear policy or oversight, what starts as convenience can quickly become a governance headache.
It’s everywhere. From automated assistants and smart analytics to synthetic voice, code and content, artificial intelligence is reshaping the way businesses operate. Or at least, it promises to.
But beneath the rush to adopt new tools lies a growing tension. Leaders are asking how to embrace AI’s potential without exposing their organisations to unexpected risks. That tension has moved from the IT team to the boardroom.
So is AI ready for business? And more importantly, is your business ready for AI?
Used well, AI can save time, improve decision-making and reduce operational friction.
Early adopters are seeing value in areas such as customer service (via intelligent chatbots), threat detection (through pattern-recognition models), and internal productivity (with large language models summarising reports or drafting content).
Some organisations are already integrating AI into more strategic domains, including financial forecasting, supply chain optimisation and legal document review.
AI is no longer a lab experiment or tech pilot. It’s showing up in Microsoft 365, Salesforce, HR platforms and customer-facing products.
With any new technology, benefits arrive faster than safeguards. The biggest concern? Visibility. Many companies are unsure how many AI tools are being used across their teams, and by whom.
Security researchers have highlighted examples where employees have pasted sensitive data into free-to-use tools like ChatGPT, with no clear policy on data handling or retention. In some cases, proprietary code or client documents were processed by public models without oversight.
And then there’s the quality problem. Generative AI systems can produce convincing but incorrect content, sometimes called “hallucinations”. If employees rely on that output without human checks, the consequences could range from embarrassing to legally risky.
Data leakage
Who controls what’s shared with AI tools? Are prompts stored? Can outputs be retrieved?
Compliance ambiguity
If an AI system makes a decision about a loan, a CV or a medical case, who’s accountable?
Shadow adoption
Staff may use AI tools without approval, bypassing procurement, infosec and legal review.
Third-party risks
AI features are now embedded in software from vendors who may not fully explain how models are trained or secured.
Workforce impact
While automation can free up time, it can also introduce anxiety, over-reliance or confusion about roles.
The point isn’t to scare teams off AI. It’s to put the right checks around it, and ask better questions before diving in.
What data is this AI trained on?
Can I audit its decisions?
What happens to the information I give it?
Could I explain this process to a regulator?
When AI is deployed with structure, it can amplify the best of your business. But without that structure, it can create blind spots that are hard to spot and harder to fix.
Most organisations don’t need to roll out a full AI governance framework overnight. But they do need to know where AI is already in use, where it could add value, and where it might cause problems if left unmanaged. That means focusing on three areas: visibility, policy, and people.
AI adoption rarely starts with a strategy. It often starts with curiosity.
A marketing executive asks ChatGPT to draft a campaign. A developer uses GitHub Copilot to write boilerplate code. A finance analyst tries an AI plugin to summarise invoices.
These aren’t fringe examples. They’re happening across sectors, often with no formal sign-off.
Start with a simple discovery exercise:
This doesn’t need to be a surveillance exercise. It’s about understanding exposure so that you can design controls that support good behaviour, not block productivity.
A five-page acceptable use policy hidden in a shared folder won’t cut it.
Instead, offer clear, accessible guidance that answers everyday questions:
Good policies don’t just list rules, they reduce uncertainty. Include examples, highlight grey areas, and make it clear where accountability sits.
It’s also important to coordinate with legal, data protection, and procurement teams. Make sure contract reviews cover AI features, vendor claims, model updates and data retention.
Once you know where AI is used, introduce basic safeguards:
For high-risk use cases, such as tools that screen CVs, score loan applicants or summarise legal documents, establish a review process and document the checks.
AI risk is rarely about malicious intent. It’s more often about unintended consequences. Controls should make it easier to do the right thing.
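One such control is a redaction step that masks obviously sensitive tokens before a prompt leaves the organisation. The patterns below are deliberately simple illustrations (email addresses and card-like digit runs); your data-protection team would define the real set:

```python
import re

# Illustrative patterns only; extend to whatever your organisation
# classes as sensitive (names, NI numbers, client references, ...).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Mask likely sensitive tokens before sending text to an AI tool."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt
```

Wired into a gateway or browser extension, a filter like this makes the safe path the easy path: staff keep the tool, the data stays home.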
Finally, AI governance isn’t just about tech or compliance. It’s about trust.
Staff need to feel confident they can ask questions, raise concerns and explore new tools safely. That means:
The goal is not to shut down AI. It’s to help your people use it wisely, and to know where the boundaries lie.
Yes, there’s hype. But there’s also a genuine opportunity for well-governed, carefully scoped innovation.
AI isn’t just another tool. It’s a change in how decisions are made and knowledge is created.
The organisations that will benefit most aren’t the ones who adopt it first, they’re the ones who ask the right questions before they do.
2025-07-15
A 7.3 Tbps DDoS attack is a reminder that the basics of security are still our biggest blind spots. Here’s what IT leaders and non-technical teams need to learn from the world’s biggest DDoS attack.
In the age of zero trust, AI-driven threat detection and cyber insurance, it’s easy to think the era of crude, brute-force attacks is behind us. But last month’s record-breaking distributed denial-of-service (DDoS) attack is a sharp reminder that some of the oldest threats in our playbook are still among the most potent.
According to Cyber Security News, in May 2025 Cloudflare successfully mitigated an unprecedented DDoS attack peaking at 7.3 terabits per second (Tbps). To put that number in context: it is more than five times the scale of the infamous 2018 GitHub attack, which held the record at 1.35 Tbps at the time.
These numbers are staggering, but they’re not the most important part of the story. The real lesson for CIOs, CISOs and business leaders alike is that basic infrastructure vulnerabilities, complacency and underinvestment in fundamental resilience still pose some of our biggest risks.
This is a wake-up call, not just for the people who wear a security badge, but for every executive who signs off budgets and roadmaps for how digital services are delivered.
Let’s break this down. Distributed denial-of-service attacks aren’t new. The concept is brutally simple: flood a target’s servers with so much traffic that they become overwhelmed and legitimate users can’t get through. It’s the digital equivalent of tens of thousands of people queuing outside your shop, blocking the doors for genuine customers.
What’s changed isn’t the tactic itself, but the scale and sophistication. Botnets today are built from armies of compromised IoT devices, misconfigured servers and unsecured endpoints around the world. Each individual device might have a trivial amount of bandwidth. But when thousands, or millions, of them are marshalled together, the result is a tidal wave that can knock over the world’s biggest brands.
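The arithmetic is sobering. Using round illustrative numbers (not measurements of any real botnet):

```python
def aggregate_tbps(devices: int, per_device_mbps: float) -> float:
    """Aggregate attack bandwidth in Tbps (1 Tbps = 1,000,000 Mbps)."""
    return devices * per_device_mbps / 1_000_000

# A million devices pushing a modest 7.3 Mbps each adds up to a
# 7.3 Tbps flood, roughly the scale of the record Cloudflare attack.
print(aggregate_tbps(1_000_000, 7.3))
```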
And this hasn’t been the only large-scale attack in recent years. Microsoft’s own article, Unwrapping the 2023 holiday season: A deep dive into Azure’s DDoS attack landscape, noted a rise in attacks, with its security infrastructure automatically mitigating a daily peak of 3,500 attacks.
The tools to launch this kind of chaos aren’t locked away on the dark web anymore. Many are off-the-shelf scripts, available to anyone with a browser, a crypto wallet and a grudge.
This is not a one-off. In its Digital Defense Report 2024, Microsoft said it mitigated 1.25 million DDoS attacks in the first half of 2024, a fourfold increase on the previous year.
What’s more concerning is the continuing trend towards larger, shorter, more targeted bursts. Attackers know that short, massive spikes are harder to trace and easier to launch from disposable infrastructure. The record-breaking 7.3 Tbps blast lasted just minutes, but that’s enough to take down services that aren’t properly defended.
For businesses, the consequences can be severe: downtime, lost revenue, damaged customer trust and, in some regulated sectors, significant penalties.
Too many leaders still treat DDoS as an IT-only concern. It’s not. The ripple effect of even a short outage can hit supply chains, customer service, brand reputation and share prices. When GitHub was hit in 2018, it survived because it had invested heavily in upstream mitigation and a robust incident response plan. Not every organisation is so prepared.
Ask yourself: if your main web portal, customer login or payments gateway went down for an hour on Black Friday, what would the cost be? And would that lost business ever come back? Most boards have rough figures for the cost of a data breach or ransomware demand. Very few track the true business cost of unplanned downtime in the middle of their busiest season.
If we know the threat so well, why does it keep working? The answers aren’t complicated, they’re painfully familiar.
1. Weak Basic Hygiene
Far too many businesses still run poorly configured servers that can be used as open relays for reflection attacks. IoT devices ship with default passwords that are never changed. Public-facing APIs expose unnecessary endpoints. The basics matter, and they’re too often overlooked.
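Reflection attacks work because some UDP services answer a small spoofed request with a much larger response, so an open relay multiplies whatever the attacker sends. As a rough sketch (the amplification factors below are commonly cited ballparks and vary widely by service and configuration):

```python
def reflected_gbps(spoofed_request_gbps: float, amplification: float) -> float:
    """Traffic arriving at the victim for a given spoofed request rate."""
    return spoofed_request_gbps * amplification

# Ballpark amplification factors (these vary widely in practice):
print(reflected_gbps(1, 50))   # open DNS resolver, ~50x  -> 50 Gbps
print(reflected_gbps(1, 500))  # legacy NTP monlist, ~500x -> 500 Gbps
```

This is why a single misconfigured server matters: it turns one gigabit of attacker traffic into hundreds aimed at someone else.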
2. No Layered Defence
Some organisations still believe a single vendor or firewall will save them. Real resilience comes from layers: upstream DDoS scrubbing, geo-fencing, intelligent traffic shaping and the ability to spin up extra capacity in the heat of an attack.
3. Complacency About Scale
Many organisations test for “typical” spikes, the kind that come during a big product launch or seasonal sale. But they rarely test what happens if they get hit with an attack an order of magnitude bigger than their largest peak. That’s exactly what Microsoft’s data shows: attackers are scaling up faster than defenders plan for.
So, what should an IT leader, or any business leader, take away from this? Let’s look at what the most resilient organisations have in common.
1. They Know Their Attack Surface
They keep an up-to-date map of every public-facing asset: websites, APIs, partner integrations, third-party services. They understand where they’re exposed and where there are weak spots.
2. They Run Live Drills
It’s one thing to have a DDoS mitigation contract. It’s another to know how it works under stress. The best teams run war games: they simulate massive floods of traffic and practise switching over to backup servers or alternative routing in real time.
3. They Budget for Resilience
Too many businesses treat DDoS protection as a ‘nice to have’. The smart ones know it’s cheaper than recovering from hours of downtime. They budget for upstream mitigation through providers like Cloudflare, Akamai or Microsoft’s own Azure DDoS Protection, and they test it regularly.
4. They Talk to the Business
This is key. Security is not an IT silo. The best IT and security leaders I know talk in terms the board understands: risk to revenue, customer trust, compliance and reputation. When security is a business conversation, it gets funded properly.
There’s another layer here that many ignore: supply chain risk. The biggest DDoS botnets don’t grow in isolation. They thrive because countless companies leave digital doors wide open.
A misconfigured server in one small business can become part of the botnet that brings down your global website tomorrow. And you might not even know it’s your supplier until it’s too late.
This is why supply chain security is becoming a board-level issue. Regulators are paying attention, too. In the EU, the NIS2 Directive expands obligations for supply chain security and incident reporting. Similar moves are afoot in the UK and US.
The conversation about DDoS shouldn’t stop at mitigation. The strongest organisations look at how quickly they can recover. That means designing for redundancy, distributing workloads across multiple providers and building graceful degradation — so critical services keep running even if parts of the system go dark.
Think of the difference between a single web server running your main customer portal versus a global content delivery network (CDN) with built-in failover. When GitHub survived its record 2018 attack, it did so because it used Akamai’s Prolexic service, a vast distributed scrubbing network that absorbed malicious traffic upstream before it hit GitHub’s servers.
That model still works. In fact, it’s more relevant than ever as DDoS tactics evolve.
If you’re reading this and you’re not the person configuring firewalls day to day, you still have a crucial role to play. Good security starts with good questions.
Ask your IT and security teams:

- Do we have upstream DDoS mitigation in place, and when did we last test it under realistic load?
- Do we keep a current map of every public-facing asset, including third-party integrations?
- How quickly can we fail over to backup capacity or alternative routing?
- What would an hour of downtime cost us on our busiest trading day?
You don’t need to know how to write the code. You do need to know whether the basics are in place.
According to recent research, the average cost of downtime has climbed as high as $9,000 per minute for large organisations. For higher-risk enterprises in finance and healthcare, downtime can eclipse $5 million an hour in certain scenarios, and that’s not including any potential fines or penalties.
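Those per-minute figures compound quickly. A back-of-the-envelope calculation using the numbers cited above:

```python
# Back-of-the-envelope downtime cost, using the per-minute figure cited above.
COST_PER_MINUTE = 9_000  # USD, large-organisation average

def downtime_cost(minutes: float, per_minute: float = COST_PER_MINUTE) -> float:
    return minutes * per_minute

print(downtime_cost(60))      # one hour -> 540,000
print(downtime_cost(8 * 60))  # a working day -> 4,320,000
```

Half a million dollars an hour is the kind of number that makes upstream mitigation look cheap.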
Ultimately, DDoS attacks are not about stealing data, they’re about trust. If customers can’t access your service, they don’t care whether it was a hostile state actor, a bored teenager or a professional extortion racket. They care that you weren’t ready.
And they may not come back.
The 7.3 Tbps attack won’t be the last record breaker. If anything, it’s a milestone we’ll look back on as just the start of a new arms race in volumetric attacks. As bandwidth grows, so does the scale of potential disruption.
But that doesn’t mean we’re powerless. The fundamentals remain the same: know your environment, plan for the worst, test regularly and embed resilience as a business priority, not an afterthought.
Security stories can feel overwhelming. But remember: it’s rarely the shiny new threat that gets us, it’s our neglect of the basics.
A record-breaking DDoS attack might grab the headlines. But the real question is whether it changes our habits. For leaders, now is the moment to make sure that when the next wave hits, and it will, you’re ready, resilient and able to keep the lights on when your customers need you most.
2025-07-14
54% of employees admit to reusing work passwords, exposing organisations to preventable credential attacks. Here’s what IT and business leaders should be doing instead.
Despite years of cyber awareness campaigns, new data from Bitwarden’s World Password Day Survey 2025 shows that 54% of employees still reuse passwords across multiple work systems.
It’s a number that should prompt pause, especially at a time when credential-based attacks remain one of the most common breach vectors across cloud, SaaS and hybrid infrastructure.
The logic behind reuse is often innocent: convenience, habit, or a lack of clear guidance. But to an attacker, it’s an open invitation.
Stolen passwords from third-party breaches are readily available online, and cybercriminals use automated tools to plug them into email platforms, VPNs, collaboration tools and admin consoles. It’s called credential stuffing, and it doesn’t require any hacking skill at all.
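Defenders can check whether a password appears in those breach dumps without ever sending it anywhere, using the k-anonymity scheme popularised by the Pwned Passwords service: hash the candidate locally, transmit only the first five hex characters of the SHA-1 digest, and compare the returned suffixes offline. A minimal sketch of the client-side split (no network call shown):

```python
import hashlib

def range_query_parts(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 digest into the 5-character prefix that
    is sent to the service and the suffix that is compared locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = range_query_parts("123456")
print(prefix)  # 7C4A8 -- only this ever leaves your network
# The service returns every suffix sharing that prefix; the match
# (or lack of one) is determined locally, so the password stays private.
```

The same primitive can power reuse alerts inside a vault or identity provider.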
“Reusing a password is like re-using the same key for every lock and having that key be something that you give out to everyone you meet.”
Joe Siegrist, CEO of LastPass (Inc. Magazine)
Even in large, well-resourced organisations, password reuse persists for several reasons.
In many firms, employees still reset passwords quarterly, without tools to track reuse.
The result? Shortcuts.
Good password hygiene is a shared responsibility, and it begins with smart defaults, not strict rules.
Here are four moves that every CIO, CTO or COO can prioritise:
1. Give Everyone a Password Manager
Make a secure password manager available to everyone.
Modern enterprise tools provide vaults, autofill, alerts and admin oversight, making unique credentials easier to manage, not harder.
2. Enforce Strong MFA
Multi-factor authentication remains one of the strongest defences against stolen credentials.
Use app-based or hardware methods by default; phase out SMS or email-based MFA where possible.
3. Kill Legacy Authentication
Disable POP3, IMAP and basic authentication.
Move to federated login or single sign-on where possible, and ensure OAuth is the default for new SaaS tools.
4. Rethink Awareness Messaging
It’s not about entropy scores or symbol count. Focus messaging on impact: what can happen when one password unlocks too much. Link stories to real breaches, phishing campaigns and what they cost the business.
Organisations are starting to see results from shifting their posture away from password punishment.
“We moved from 90-day resets and complexity rules to vaults, MFA, and supportive guidance,” said one FTSE250 cyber lead.
“Helpdesk resets dropped. Credential stuffing alerts went down. Most importantly, our staff stopped gaming the system.”
| Metric | Why it matters |
|---|---|
| Vault adoption rate | Are employees actually using the password manager you provide? |
| Reuse alerts | Does your vault or IDP detect password overlap across services? |
| MFA coverage | What percentage of user accounts, especially admins, are protected by strong MFA? |
| Credential-stuffing attempts | Monitor what your IDP, firewall or SSO tool is blocking daily. |
Passwords may not be the most exciting item on a CIO or COO’s to-do list, but they remain a high-value target for attackers because they’re easy to exploit and often poorly managed.
While no single tool will eliminate credential-based risk, a shift to vault + MFA + clarity can transform your security posture in just a few months.
In short? One reused password shouldn’t bring down an entire enterprise.
📊 Source: Bitwarden World Password Day Survey 2025 (May 2025)
🗝️ Quotation: Joe Siegrist, CEO of LastPass via Inc. Magazine
📝 Written for thecio.uk – July 2025
2025-07-13
Researchers showed it took 30 minutes to pivot from a guessed login to applicant names, email addresses and full chatbot transcripts. The episode exposes how a single forgotten test account can turn into a data-protection calamity, and why default passwords have no place in modern systems.
Image credit: Created for TheCIO.uk by ChatGPT
In one of the more frustrating examples of preventable exposure, McDonald’s AI recruitment platform, McHire, was found to be exposing millions of job application records through a test admin account using the password 123456.
Researchers Ian Carroll and Sam Curry spotted the flaw at the end of June while looking into McHire’s backend. The system, developed and run by Paradox.ai, had a publicly accessible login panel for franchise HR users. The test credentials, username `123456`, password `123456`, opened the door.
Inside, they found an admin interface linked to a long-defunct test "restaurant" environment. From there, a basic API call using incrementing `lead_id` values allowed them to pull the personal data and full application transcripts of other users.
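That enumeration worked because the API trusted knowledge of an identifier as proof of entitlement. The standard fix for this class of IDOR flaw is an explicit server-side ownership check before any record is returned, sketched here (the record store and account names are hypothetical):

```python
# Sketch: authorise by ownership, never by knowledge of an ID alone.
# The record store and session account names here are hypothetical.
RECORDS = {
    1001: {"owner": "franchise-a", "transcript": "applicant chat..."},
    1002: {"owner": "franchise-b", "transcript": "applicant chat..."},
}

def get_lead(lead_id: int, session_account: str) -> dict:
    record = RECORDS.get(lead_id)
    if record is None or record["owner"] != session_account:
        # Identical response for "missing" and "not yours":
        # no oracle for attackers walking the ID space.
        raise LookupError("not found")
    return record

print(get_lead(1001, "franchise-a")["owner"])  # allowed
# get_lead(1002, "franchise-a") would raise LookupError
```

Random, non-sequential identifiers help too, but they are obscurity, not authorisation; the ownership check is the control.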
The total scope? Over 64 million job applications, covering years of applicant conversations with McHire’s chatbot, “Olivia”.
Paradox has since confirmed the exposed data included:

- Applicant names
- Email addresses
- Full transcripts of conversations with the chatbot
No CVs or national insurance numbers were leaked, but that doesn’t diminish the risk. As Carroll put it: “This data is more than enough to socially engineer job seekers or run targeted scams that look completely legitimate.”
Paradox disabled the test account the same day they were notified (30 June), and the IDOR flaw was patched immediately. No malicious access is currently suspected beyond the researchers’ activity.
But the root issue, a default credential left active in a production-connected environment, is far more telling.
It’s easy to scoff at a password like `123456`, but according to the NCSC, it’s still one of the top 10 most common in real-world breach datasets. And while most organisations wouldn’t dream of using it for core systems, test environments and sandbox tenants often slip through the net.
In this case, the environment was created in 2019 and seemingly forgotten. But its credentials were still valid, had admin-level privileges, and had direct API access to real-world user data.
The flaw wasn’t just the weak password, it was the absence of basic hygiene: no expiry on the test credentials, no review of dormant admin accounts, and no authorisation checks on an API serving real applicant data.
It’s not just the volume of data that’s worrying, it’s the context.
Applicants trusted they were speaking to a bot inside a controlled process. That means transcripts contain sensitive disclosures, availability, previous roles, even vulnerabilities like health conditions or relocation challenges.
An attacker wouldn’t need to scrape all 64 million. A few hundred high-fidelity records would be enough to build convincing phishing kits, employment scams, or identity-theft campaigns targeting those actively seeking work.
The average jobseeker is more likely to respond to an email that seems to come from McDonald’s recruitment. This breach gave attackers everything they’d need to impersonate that channel convincingly.
This isn’t a story about McDonald’s being a soft target. It’s a story about the risks that linger in the corners of any scaled digital estate, especially in supplier-hosted platforms.
Make test accounts time-limited and auto-expiring. Tag them in your IAM platform and treat them as high risk until removed.
Enforce deny lists and block credential patterns known from breach corpuses. If your password policy allows `123456`, the policy is broken.
Just because it's SaaS doesn’t mean it's safe. If your brand is on the front end, you own the risk, and the reputational blowback.
Test environments shouldn’t mean test-grade security. Same authentication standards, same visibility, same response playbooks.
The best outcome here was that ethical researchers found it first. Your org should know exactly how to respond, investigate and remediate, fast.
Paradox.ai has now launched a public bug bounty programme. McDonald’s says it's reviewing controls and supplier access. No regulators have announced formal investigations (yet), but in privacy terms this is a breach in all but name, and would almost certainly be reportable under UK GDPR or California’s CCPA if replicated in those markets.
If there’s one positive here, it’s visibility. Few incidents spell out the consequences of default passwords and abandoned access quite so clearly.
As Carroll summed it up:
“It was a literal 30-minute journey from the world’s most obvious login to 64 million records. No tricks. Just a forgotten door, left open.”
2025-07-11
True cyber resilience goes beyond technical controls or annual awareness campaigns. It’s about building a culture where everyone feels a personal stake in security. Here’s why ownership matters, and how IT leaders can help every team member shift from “they” to “we”.
If you read my previous piece—Cyber Starts with Culture: Why Technical Controls Aren’t Enough, you’ll know I believe technology alone can’t solve cyber risk. Controls matter, but it’s people and their behaviours that make the biggest difference.
Cyber incidents rarely come from sophisticated nation-state attacks. More often, they start with everyday things: a click on a dodgy link, a process shortcut, or too much trust given to a supplier. When you look closely, the real weakness isn’t technology—it’s people believing cyber is someone else’s problem.
In many organisations, cyber security is still seen as the IT department’s job. You’ll often hear, “They’ll deal with it,” or “That’s not my area.” But the reality is, this thinking leaves gaps everywhere—gaps that attackers are only too happy to exploit.
The best organisations break out of this mindset. They encourage every employee, from apprentice to board member, to see security as something they own. The cultural shift from “they” to “we” is a subtle one, but it’s at the heart of genuine resilience. It’s not just about protecting the company; it’s about protecting colleagues, clients, and your own reputation.
In organisations where a cyber-first culture is thriving, you notice a few things straight away: staff report near-misses without fear of blame, leaders ask about risk as routinely as they ask about revenue, and conversations about security happen openly in team meetings rather than behind closed doors.
It’s not about being perfect. It’s about being open, honest, and willing to improve together.
Changing culture isn’t easy. Most people want to do the right thing, but a few classic obstacles get in the way: fear of blame when mistakes happen, the lingering belief that security is someone else’s job, and leadership that treats cyber as a tick-box exercise.
Recognising these issues is half the battle. Overcoming them is about making ownership easy, safe, and rewarding.
Here’s one example of what I’ve seen work in a real organisation:
At one organisation I worked with, security was seen as someone else’s job until a close call with an email scam. Instead of locking everything down and blaming the user, the company used the incident as a case study in a town hall session. Staff who reported the scam were praised, lessons were shared openly, and the leadership team took questions directly. The result? A noticeable jump in both incident reporting and collaboration between teams—and a sense that everyone had a role to play.
Ownership only works if leaders are ready to share it. If the board treats cyber as a tick-box or a budget line, the rest of the organisation will do the same. But when leaders regularly ask about risk, join simulations, and praise those who speak up, ownership starts to feel normal.
The NCSC and FCA both make it clear: cyber resilience isn’t just a technical matter; it’s a leadership responsibility. It has to run right through the organisation, from top to bottom.
You can’t manage what you can’t measure. Look at engagement in training sessions, the number and quality of reported near-misses, and the openness of conversations around risk in team meetings. Use staff feedback to spot blind spots and improve your approach.
Regular pulse surveys, open forums, and post-incident reviews are all great ways to keep your finger on the pulse—and to show staff that their input genuinely shapes future decisions.
When you get culture right, cyber stops being just a risk—it becomes a business enabler. It can help win client trust, support digital transformation, and demonstrate to regulators and partners that you take your responsibilities seriously.
A culture of ownership also unlocks faster, more flexible ways of working. Teams who feel trusted and involved are more likely to speak up, collaborate, and embrace new tech securely.
Moving from awareness to ownership isn’t about rolling out another tool or policy. It’s about creating an environment where everyone feels trusted, responsible, and safe to speak up.
If you want genuine cyber resilience, invest in your culture. Make ownership everyone’s business, and you’ll find your strongest defence is your own team.
For more on this theme, see: Cyber Starts with Culture: Why Technical Controls Aren’t Enough.
2025-07-07
Ingram Micro, the world’s largest IT distributor, suffered a major ransomware attack in July 2025, forcing global platform outages and revealing systemic supply chain vulnerabilities. The SafePay group has claimed responsibility for the incident, which has sent shockwaves through the IT channel and prompted urgent reviews of supplier resilience across the sector.
Image credit: Created for TheCIO.uk by ChatGPT
On 6 July, Ingram Micro publicly confirmed a ransomware attack had compromised parts of its internal systems. The company responded by isolating affected environments and engaging external cybersecurity experts to assist with the investigation. Law enforcement was also brought in as Ingram Micro began notifying its extensive global partner network.
The SafePay ransomware group quickly claimed responsibility for the attack. Industry sources indicate that the group exploited a vulnerability in Ingram Micro’s GlobalProtect VPN infrastructure, using compromised credentials to gain access. The method fits a growing pattern of attackers targeting remote access platforms, particularly where security controls such as multi-factor authentication are not uniformly enforced or where critical patches are outstanding.
As a result of the attack, Ingram Micro was forced to take offline several key platforms, including its Xvantage AI-powered distribution portal and the Impulse licence provisioning system. These outages immediately affected IT resellers, managed service providers, and enterprise customers who depend on Ingram Micro for just-in-time delivery and centralised procurement.
Customers reported significant disruption, including difficulties placing and tracking orders, and many expressed frustration at the lack of initial communication from the company. The timing of the attack, coinciding with the end of the financial quarter, amplified concerns over delayed shipments, billing backlogs, and the knock-on effects on client projects.
Financial analysts estimate Ingram Micro could lose up to $136 million in daily revenue while core systems remain unavailable. The disruption also prompted some enterprise clients to explore alternative suppliers, concerned about the risk of future single points of failure.
The impact of the ransomware attack quickly rippled through the IT supply chain. Ingram Micro is not just a single supplier; for many in the technology sector, it represents the backbone of procurement and distribution. When an organisation of this scale is compromised, the aftershocks extend far beyond its own customer base, affecting thousands of businesses globally.
Project deadlines, service level agreements, and even regulatory compliance were suddenly under threat as customers struggled to access products and services. The event has reignited debate about the risks of supplier concentration, with many organisations now revisiting their procurement strategies and continuity plans. Questions around business continuity, contract language, and supplier transparency have moved to the top of the boardroom agenda.
In the wake of the incident, it is clear that effective supply chain security now requires an understanding of not only one’s own cyber posture, but also that of critical partners. Business leaders are considering whether their existing contracts provide sufficient safeguards around incident notification, resilience testing, and exit routes should a major supplier face operational paralysis.
The attack on Ingram Micro is the latest in a series of high-profile ransomware incidents targeting supply chain lynchpins. It serves as a reminder that even global leaders in IT distribution can be caught out by sophisticated adversaries leveraging increasingly advanced techniques. The event has sparked renewed scrutiny of remote access infrastructure, with security teams across the sector reviewing the use of VPNs, patch management policies, and authentication methods.
At the same time, the response to the incident has underscored the need for clear, timely communication with customers and partners during a crisis. The early hours of uncertainty only heightened anxiety among clients, reinforcing the importance of transparency in maintaining trust.
For IT leaders and aspiring CIOs, the Ingram Micro case is a sobering illustration of modern cyber risk. It highlights the interconnectedness of today’s digital supply chains and the need for operational resilience—not just within one’s own walls, but throughout the partner ecosystem.
From a technical expert’s perspective, the Ingram Micro attack is a textbook example of how quickly a security lapse can spiral into large-scale disruption. The breach, reportedly exploiting a remote access vulnerability, is a reminder that even mature enterprises remain vulnerable to overlooked gaps and evolving threats.
This incident shows that patch management and robust authentication protocols are not simply regulatory boxes to be ticked, but fundamental defences that must be woven into daily operational practice. The sophistication of modern ransomware groups also means IT teams need to adopt an “assume breach” mindset—actively hunting for threats, not just passively defending the perimeter.
Supply chain risk is now a board-level conversation, and technical leaders have a seat at the table. This means building relationships with key suppliers, setting clear expectations for transparency and incident reporting, and ensuring resilience is a shared objective. Regular supplier audits, simulation exercises, and clear escalation paths are no longer “nice to have” but essential business practices.
Finally, this episode is a lesson in communication. The speed and clarity with which an organisation responds—both internally and with customers—can make a material difference to how the crisis is perceived and managed. For IT leaders, developing both technical and communication skills is vital as the boundaries between IT and business resilience continue to blur.
#CyberSecurity #Ransomware #SupplyChain #ITOperations #IncidentResponse
2025-07-07
"Apprenticeships offer a powerful, underused route into ICT and cyber roles by focusing on real-world capability over credentials. Ben Meyer argues that tech leaders must invest in potential to build diverse, resilient teams equipped for the challenges ahead."
The pace of innovation in tech is relentless. Cloud infrastructure, cyber threats, AI and digital platforms are all evolving in real time. To keep up, we often look to emerging tools, frameworks and providers.
But what if the most important innovation opportunity isn’t a piece of software — it’s how we find and develop the people behind it?
Our industry has long operated on a default setting: academic qualifications plus experience equals capability. But that logic is flawed. Talent doesn’t follow a formula — and some of the most capable technologists I’ve worked with got their start through an apprenticeship, a career change or a non-traditional route.
We’ve created a tech hiring culture that’s simultaneously competitive and constrained. We demand 3–5 years of experience for “entry-level” jobs. We filter CVs based on keywords and degree classifications. And then we’re surprised when we struggle to fill roles or build diverse teams.
Apprenticeships challenge this model. They allow people to develop real-world skills while earning a wage, gaining experience and building confidence. But more importantly, they represent a broader philosophy: that potential matters as much as polish.
In my work as a BCS assessor, I meet candidates from all walks of life — ex-retail staff, school leavers, parents returning to work, career switchers. Many arrive with imposter syndrome, unsure if they “deserve” a place in tech. Yet time and time again, they prove they do. Not because of where they’ve been, but because of where they’re going.
One candidate I assessed had worked in logistics before joining a digital support apprenticeship programme. No degree, no prior experience in IT. But they came prepared, having documented their projects and learned to script solutions for onboarding new staff.
Another candidate, who had previously worked in hospitality, demonstrated clear cybersecurity thinking — not because they’d studied it at university, but because they’d self-taught, practised risk modelling and brought their understanding of people and process into their final assessment.
These are not exceptions. They are proof that capability is everywhere — and that traditional hiring filters are often too blunt to spot it.
Let’s be pragmatic for a moment. Beyond the moral and social case, there is a clear business case for apprenticeships.
For organisations dealing with persistent cyber threats, complex infrastructure demands, and the pressure to modernise legacy systems, investing in hands-on ICT and cyber talent is not just beneficial — it’s essential.
As senior tech leaders, we’re in a unique position to open doors — or close them. The hiring policies we support, the progression paths we build, and the narratives we tell about success all shape our culture.
Here’s what I believe we should be doing.
We can’t say we value innovation if we only hire people with the same background and experience as ourselves.
The future of tech should reflect the full diversity of our society — not just in ethnicity, gender or background, but in thought, experience and perspective.
If we want to solve complex problems, we need problem-solvers who see the world differently. Apprenticeships are one of the best ways to achieve that — and the impact extends far beyond the workplace.
They create career mobility. They increase confidence. They provide a sense of purpose and belonging. And they show that your worth in this industry is defined not by where you started, but by how far you’re willing to go.
The next brilliant engineer, security lead or systems architect might be out there today working in a call centre, waiting tables, or managing stockrooms. With the right support, they could be leading technical innovation tomorrow.
Let’s stop gatekeeping talent. Let’s invest in potential — and build a better future for tech.
2025-07-06
The new GOV.UK app brings public services together in a single, user-friendly platform. With strong cyber security, accessibility features, and real efficiency gains, it sets a new benchmark for digital government. Notably, it’s among the first UK public sector apps to integrate AI-powered support—demonstrating that artificial intelligence is more than just the latest buzzword.
Image credit: Department for Science, Innovation and Technology
Cyber security is central to the GOV.UK app’s design. The One Login system provides robust authentication, including facial recognition and biometrics, instead of traditional passwords. All data is encrypted in transit and at rest, and the app undergoes regular security testing with support from the National Cyber Security Centre. A clear incident response plan is in place, with prompt user notifications if issues arise. The planned digital wallet feature will be subject to even stricter reviews.
Accessibility is a core principle, not an afterthought. The app is fully compatible with screen readers, features high-contrast themes, and lets users adjust font sizes for readability. Clear, jargon-free language ensures everyone can understand and use the app. Keyboard navigation is built in, and support for Welsh and other languages is on the way. User feedback is encouraged and will drive ongoing improvements.
The GOV.UK app serves as a one-stop shop for everything from tax and benefits to local council services. It reduces the need to navigate multiple sites or complete paper forms. Personalised notifications keep users informed of key deadlines like MOT or passport renewal, and the upcoming digital wallet will reduce paperwork even further. All this streamlines government processes and is expected to bring substantial savings.
Artificial intelligence is everywhere—in IT, in non-IT offices, and now in public services. The GOV.UK app is embracing AI in a practical way, beyond the hype. A generative AI chatbot, arriving later in 2025, will help guide users through complex tasks, answer frequently asked questions, and reduce the burden on support centres. Unlike earlier chatbots, this version aims to be genuinely helpful and conversational.
Behind the scenes, integration of AI and IT is significant. Bringing together systems from central and local government, supporting secure logins, managing notifications, and enabling features like the digital wallet all require strong IT architecture. The app uses scalable cloud infrastructure and is subject to ongoing audits for resilience and compliance.
While digital is the way forward, it’s not for everyone. The government is maintaining traditional contact channels and supporting digital skills initiatives. Privacy remains a top concern, with full compliance with UK GDPR and the Data Protection Act, plus clear user controls over personal information. The app is currently in public beta, with real user feedback shaping its evolution.
The GOV.UK app is a significant step forward for digital public services in the UK. By combining robust security, accessibility, efficiency, and AI integration, it sets a new standard—showing that digital government can be both innovative and inclusive.
#DigitalTransformation #CyberSecurity #GOVUK #PublicSector #Accessibility #AI
2025-07-01
Technical controls are essential, but culture is what actually makes them effective. Drawing on NCSC guidance and real-world experience, here’s why cyber resilience starts with people and attitude, not just process or technology.
Image credit: Created for TheCIO.uk by ChatGPT
You can invest in all the firewalls, monitoring tools and policies you like; if your people aren’t on board, you’re still vulnerable. Culture is what turns technical controls into real protection.
If you ask any security leader for their biggest risk, most will quietly admit: it’s not the latest exploit, it’s everyday behaviours and attitudes. One careless click can undo years of investment.
I’ve seen it myself: organisations with every piece of security kit money can buy, still undone when one well-intentioned member of staff clicks a dodgy link. The truth is that people are at the heart of every breach, every response, and every successful recovery.
Culture isn’t an add-on to your controls. It’s what gives them value in the first place.
The National Cyber Security Centre (NCSC) is blunt about this. Their guidance on the human factor says most successful attacks are down to ordinary people making ordinary mistakes, not some “Hollywood” hack.
The NCSC’s frameworks—like Cyber Essentials—are as much about bringing people with you as they are about ticking technical boxes. Leadership visibility, openness, and a willingness to learn are non-negotiable. Their message is universal: build a culture where people feel able to challenge, question, and admit mistakes without fear.
Let’s be honest: policy is easy, behaviour is hard. We’ve all worked somewhere with a ten-page password policy everyone finds ways around. You don’t win hearts and minds with laminated posters or e-learning modules done with the sound off.
Real change starts when people want to do the right thing—not just because they’re told to, but because they understand the why. When colleagues know they won’t get their head bitten off for reporting a slip-up, and sharing a near-miss actually leads to positive change, you’re making progress.
It doesn’t matter how many times you say “cyber is everyone’s job”—if leaders treat it as a tick-box or an afterthought, staff will do the same. Leaders have to show up.
Make cyber risk a standard agenda item, not just for IT, but for the whole organisation. Celebrate when someone reports a suspicious email or spots a permissions issue before it becomes a problem.
The NCSC is clear: leaders must be visible, approachable, and genuinely engaged in the details—not just the headlines.
Here’s what I’ve seen work—and what I try to do myself:
Make training relevant and regular
Not the same tired PowerPoint every year. Use real stories, examples, and open Q&A.
Reward the right behaviours
Celebrate “good catches”. Positive reinforcement always beats shaming mistakes.
Normalise talking about risk
It’s not negative to ask, “What’s the worst that could happen?”—it’s good risk management.
Involve every department
It’s not just IT’s problem. Every team has their own risks and perspectives.
Share near-misses and lessons learned
Encourage people to talk about what almost went wrong, so everyone can learn.
Review incentives and targets
Are you rewarding speed at the expense of safety? Be honest about what you’re actually encouraging.
Measure culture, not just controls
Look at engagement in training, near-miss reports, and honest feedback. If you aren’t measuring it, you aren’t managing it.
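The measurement point above can be made concrete with a few simple ratios. The sketch below is purely illustrative: the field names, weightings and thresholds are invented for this example, not a standard framework, and any real scoring would need tailoring to your own organisation.

```python
def culture_score(metrics):
    """Combine a few people-focused signals into a rough 0-1 indicator.

    Hypothetical inputs: training completion, near-miss reports per ten
    staff (capped at 1.0), and survey agreement with "I can report
    mistakes without blame".
    """
    training = metrics["training_completed"] / metrics["headcount"]
    reporting = min(metrics["near_miss_reports"] / metrics["headcount"] * 10, 1.0)
    openness = metrics["no_blame_agree"] / metrics["survey_responses"]
    # Equal weighting keeps the sketch simple; real weightings are a
    # judgement call for each organisation.
    return round((training + reporting + openness) / 3, 2)

score = culture_score({
    "headcount": 200,
    "training_completed": 180,
    "near_miss_reports": 12,
    "survey_responses": 150,
    "no_blame_agree": 120,
})
print(score)  # 0.77
```

Even a crude number like this, tracked quarter on quarter, tells you whether engagement is moving in the right direction, which is the point: if you aren’t measuring it, you aren’t managing it.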
A while ago, I worked with an organisation that rolled out new security tools every year. But it wasn’t until they introduced “story sessions”—safe spaces where anyone could share a near-miss or lesson learned without fear of blame—that things genuinely changed. Incidents dropped, engagement shot up. It was the culture of openness, not technology, that made the difference.
If you do just one thing after reading this, make it a conversation: ask your team where they feel unsure or unsupported around security. You’ll learn more in ten minutes than from any audit.
Culture isn’t a project with an end date—it’s something you have to live and lead, every day. You can spend millions on technology, but your strongest defence is always a team that cares and feels empowered to do the right thing.
The NCSC get it. It’s time we all did.
For more on this theme, see: From Awareness to Ownership: Building a Cyber-First Culture.
2025-06-30
"Exploring the unique cybersecurity challenges facing financial firms, and why the sector remains a prime target for cybercriminals."
Image credit: Freepik
Cybersecurity is rarely out of the headlines these days. For financial companies, however, it’s not just a trending topic – it’s an ever-present concern that keeps leaders awake at night.
Financial institutions sit at the intersection of money, data, and trust. They hold vast reserves of sensitive information – customer details, transaction data, and payment records. Cybercriminals know this, which is why banks, investment firms, and insurers are under constant attack.
It’s not just about money. A successful attack can also damage a company’s reputation, shake customer confidence, and in some cases, threaten the stability of the entire financial system.
Attackers are relentless, constantly evolving their tactics. Today’s threats range from phishing and ransomware to payment fraud and distributed denial-of-service attacks.
Unlike other industries, financial services have a duty to maintain public trust at all costs. Any sign of weakness is quickly seized upon by competitors, the media, and customers alike. The sheer volume of transactions, the complexity of legacy systems, and the pace of regulatory change make the job even harder.
While the threat landscape is daunting, there are reasons for optimism.
Financial firms lose sleep over cyber attacks because the stakes are uniquely high – both for their own business and for the stability of the wider economy. By building a culture of resilience, embracing new technologies, and working together, the industry can stay one step ahead of those who seek to do harm.
2025-05-29
Adidas has confirmed a cyber attack resulting in the theft of customer contact information, specifically targeting individuals who had contacted its help desk. While payment details and passwords were not compromised, emails, phone numbers, and other contact details have potentially been exposed. This is the latest in a run of high-profile retail breaches.
Image credit: Created for TheCIO.uk by ChatGPT
Adidas’ disclosure comes only weeks after similar incidents at Marks & Spencer and Co-op. The M&S cyber attack alone is expected to cost around £300m—about a third of the company’s annual profit [Financial Times]. Retailers are facing a wave of attacks from sophisticated, well-organised threat actors.
UK police are investigating the Scattered Spider group for some of these attacks, though there is no evidence linking them to Adidas [BBC News]. Adidas has also faced breaches in other markets this year, underscoring the scale of the challenge.
It’s a mistake to assume only the loss of payment data matters. The exposure of contact details, including email addresses and phone numbers, creates real and ongoing risks: stolen contact data fuels targeted phishing, smishing and identity fraud long after the breach itself.
This breach was enabled by an attack on a third-party customer service provider—a common and often underestimated threat. The UK National Cyber Security Centre consistently highlights the importance of supplier risk management, with many recent breaches beginning at partners or vendors.
UK GDPR requires organisations to notify regulators and those affected if there’s a risk to their rights or freedoms. Adidas is communicating with authorities and customers, but as consumer group Which? points out, post-breach support and guidance are just as crucial as technical fixes.
Retail’s digital expansion and dependence on third parties ensure it will remain a prime target for attackers. Cyber security must be embedded in organisational culture and treated as a board-level concern.
#CyberSecurity #Retail #Adidas #InfoSec #GDPR #DataBreach #RiskManagement
2025-05-03
"The recent M&S cyber incident is a stark reminder that no business is immune—and every organisation should review its security posture."
Image credit: Dorset Live
News broke today that Marks & Spencer has been hit by a significant cyber attack, sending ripples through the UK retail sector and beyond. While details are still emerging, early reports suggest that customer data and core business systems may have been compromised, with M&S racing to contain the fallout and reassure its millions of customers.
M&S isn’t just any retailer; it’s a British institution with a reputation built on trust and reliability. The scale of this incident, and the immediate disruption to services, is a stark reminder that even household names are not immune to the ever-evolving threats facing every organisation today.
While the investigation is ongoing, initial information points to a sophisticated cyber attack targeting both customer-facing and internal systems. This kind of breach highlights just how interconnected and complex modern IT estates have become, and why a “set and forget” approach to cyber security no longer works.
Cyber attacks can happen to anyone.
Size, reputation and investment in technology are no guarantee of safety.
Customer trust is fragile.
A single incident can undo years of careful brand building and erode customer confidence overnight.
Preparation is everything.
Robust incident response plans, tested backups and regular employee training are now non-negotiable.
M&S is working closely with law enforcement and cyber experts to investigate the breach and shore up defences. The wider message to UK businesses is clear: now is the time to double-check your own cyber resilience. Don’t wait for a crisis to put your plans to the test.
We’ll keep you updated as more details become available. In the meantime, is your organisation prepared for a similar incident?
2025-01-15
"For small and medium-sized enterprises, the right MSP can transform IT from a headache into a strategic advantage."
For small and medium-sized enterprises (SMEs), IT can sometimes feel like a constant uphill battle. There’s never quite enough time, resources are tight, and keeping pace with new technology trends can feel impossible. That’s where Managed Service Providers (MSPs) really come into their own.
An MSP is essentially an external partner who takes responsibility for some or all aspects of your IT estate—everything from daily support and monitoring to cybersecurity, backup, and strategic advice. For SMEs, this isn’t just about outsourcing technical problems; it’s about unlocking real business value.
Cost Efficiency:
Most SMEs can’t justify a full, in-house IT team. MSPs give you access to a broad range of skills and experience, but only when you need them. This flexible approach helps you avoid unnecessary overheads.
Proactive Support and Security:
Instead of just reacting to problems, good MSPs spot issues before they escalate. That means better uptime, faster response times, and a reduced risk of cyber threats.
Focus on Core Business:
Let’s face it, most SMEs aren’t in business to manage servers or patch laptops. Handing over IT operations allows your team to concentrate on growth, innovation, and customer experience.
Access to Latest Technology:
MSPs keep up with trends so you don’t have to. Whether it’s adopting cloud services, rolling out remote working solutions, or enhancing security, you get the benefit of new tech without the learning curve.
Strategic Guidance:
The best MSPs don’t just keep the lights on—they become trusted advisors. They’ll help you plan for the future, scale up (or down) as your needs change, and ensure IT underpins your long-term business goals.
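The “proactive support” point above is easiest to see with a trivial health check, the kind of thing monitoring agents run continuously so a problem is flagged before it becomes an outage. This is a generic Python sketch, not any particular vendor’s tooling, and the 80% threshold is an arbitrary example.

```python
import shutil

def check_disk(path="/", warn_at=0.80):
    """Proactive check: flag a disk before it fills, not after an outage."""
    usage = shutil.disk_usage(path)
    used_fraction = usage.used / usage.total
    if used_fraction >= warn_at:
        return f"WARN: {path} is {used_fraction:.0%} full"
    return f"OK: {path} is {used_fraction:.0%} full"

print(check_disk("/"))
```

Run on a schedule and wired into an alerting channel, dozens of small checks like this are what turn “reacting to problems” into “spotting issues before they escalate”.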
Cybersecurity remains one of the biggest risks facing SMEs, yet many lack the expertise or resources to tackle it properly. MSPs bring a wealth of experience here, implementing best practices, monitoring for threats, and ensuring you meet compliance requirements. It’s peace of mind you simply can’t put a price on.
Not all MSPs are created equal. It pays to do your homework—look for partners with a proven track record in your sector, strong customer references, and a commitment to understanding your business. Communication is key: a good MSP should be an extension of your team, not just another vendor.
For SMEs, the right MSP can turn IT from a headache into a genuine strategic advantage. By tapping into external expertise, you’re free to focus on what you do best—knowing your IT is in safe hands. In today’s fast-moving, security-conscious world, that’s not just a nice-to-have. It’s essential.
2024-12-10
"How edge computing is changing the face of IT infrastructure, and why its benefits are too significant for businesses to ignore."
In the ever-evolving landscape of information technology, edge computing has emerged as a game-changer, revolutionising how data is processed, stored, and analysed. As businesses strive for faster response times, improved reliability, and enhanced performance, the move to the edge represents a fundamental change in IT infrastructure.
Traditionally, computing tasks have been performed in centralised data centres, where large amounts of data are processed and stored. While this model has served its purpose well, it is not without its limitations, particularly in an era marked by the proliferation of Internet of Things (IoT) devices, autonomous systems, and real-time applications.
Enter edge computing – a decentralised approach that brings computation and data storage closer to the source of data generation, whether it be a factory floor, a retail store, or a smart city environment. By leveraging edge computing, businesses can reduce latency, alleviate bandwidth constraints, and improve overall system performance, thereby enabling new possibilities for innovation and efficiency.
One of the key drivers behind the adoption of edge computing is the explosive growth of IoT devices. With billions of connected devices expected to come online in the coming years, traditional cloud-based architectures may struggle to keep pace with the sheer volume of data generated at the edge. Edge computing offers a solution by processing data locally, near the point of origin, before transmitting only relevant information to the cloud for further analysis and storage.
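The local-filtering pattern described above can be sketched in a few lines of Python. This is a hypothetical illustration, not tied to any specific platform: an edge node processes raw sensor readings on-site and forwards only a compact summary plus the anomalous values to the cloud, rather than the full raw stream.

```python
def filter_at_edge(readings, low=10.0, high=90.0):
    """Process raw sensor readings locally; keep only what the cloud needs."""
    anomalies = [r for r in readings if r < low or r > high]
    summary = {
        "count": len(readings),
        "mean": sum(readings) / len(readings) if readings else 0.0,
    }
    # Only the small summary and the anomalies leave the site,
    # not every raw reading.
    return summary, anomalies

summary, anomalies = filter_at_edge([21.5, 95.2, 44.0, 8.9, 60.3])
print(summary["count"], anomalies)  # 5 readings in, only [95.2, 8.9] sent on
```

The bandwidth saving scales with the ratio of raw readings to anomalies: a device sampling thousands of times a second might transmit only a handful of values a minute.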
Moreover, edge computing holds immense potential for industries where real-time decision-making is critical, such as manufacturing, healthcare, transportation, and finance. By processing data at the edge, organisations can minimise latency and respond to events in near real-time, leading to improved operational efficiency, enhanced safety, and better customer experiences.
However, the transition to edge computing is not without its challenges. Managing distributed infrastructure, ensuring data security and privacy, and maintaining interoperability with existing systems are just a few of the hurdles that businesses must overcome. Moreover, edge computing requires a rethinking of traditional IT architectures and investment in specialised hardware and software solutions.
Despite these challenges, the benefits of edge computing are too significant to ignore. As businesses continue to embrace digital transformation and strive for competitive advantage, the shift to edge computing represents a logical evolution of IT infrastructure. By harnessing the power of edge computing, organisations can unlock new opportunities for innovation, agility, and growth in an increasingly interconnected world.
2024-06-10
"The recent London hospitals incident shows that the true impact of cyber attacks goes far beyond the IT department—and it’s time every organisation paid attention."
Cyber attacks are not just an “IT problem”—they can have serious ramifications for the entire organisation, whatever the sector. The recent attacks on London hospitals, widely covered in the media, are a stark reminder that operational disruption, patient care, and even public trust can be put at risk by a single successful breach.
Read the BBC article for more on the on-the-ground impacts.
A single vulnerability—whether it’s a human error or a system flaw—is all it takes for cyber criminals to gain entry. The group behind the recent attack has previously targeted automotive firms, Australian courts, and charities like the Big Issue, proving this isn’t just a healthcare problem. It’s an everyone problem.
To prepare for and help prevent cyber attacks, here are some key strategies:
User Training and Awareness
People remain the most unpredictable element in any security plan. No matter how strong your technical defences, all it takes is one person clicking a bad link or visiting a dodgy site to open the door. Ongoing training and awareness programmes are essential.
System Security Fundamentals
Patching, multi-factor authentication, least-privilege access, tested backups. And the list goes on.
Disaster Recovery
If a breach does happen, a robust disaster recovery plan and up-to-date backups are absolutely critical. All too often, disaster recovery is tomorrow’s task until it’s too late. Make sure plans are current, tested, and that everyone knows what to do if the worst happens.
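The “current and tested” point above is the kind of thing worth automating. As a purely illustrative sketch (the path and 24-hour window are invented for the example), a scheduled check can confirm a backup file is actually being refreshed, rather than discovering a stale backup mid-incident.

```python
import os
import time

def backup_is_fresh(path, max_age_hours=24):
    """Has the backup file been updated within the expected window?

    A freshness check is no substitute for a full restore test, but it
    catches the common failure mode of a backup job silently stopping.
    """
    age_hours = (time.time() - os.path.getmtime(path)) / 3600
    return age_hours <= max_age_hours
```

Checks like this belong alongside, not instead of, periodic restore drills: a backup only counts if you have proven you can recover from it.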
What do you think we should be prioritising? Is your organisation prepared for the next cyber attack?
2023-11-02
"IT professionals are essential for SME growth, security, and digital transformation—but do smaller businesses really recognise their value?"
Do SMEs know how IT can benefit them?
In an era driven by digital transformation, the role of Information Technology (IT) professionals has become paramount for businesses of all sizes. However, the question remains: do small and medium-sized enterprises (SMEs) and startups truly grasp the significance of IT professionals in their operations?
In the fast-paced world of entrepreneurship, SMEs and startups often find themselves juggling multiple tasks with limited resources. In such an environment, the value of IT professionals might not always be immediately apparent. Yet, overlooking the importance of IT expertise can have profound implications for the success and sustainability of these businesses.
First and foremost, IT professionals bring specialised knowledge and skills that are essential for leveraging technology to streamline processes, enhance productivity, and drive innovation. From setting up and maintaining network infrastructure to developing custom software solutions, IT professionals play a pivotal role in optimising business operations.
Moreover, in today's digital landscape, cybersecurity threats loom large, posing significant risks to businesses of all sizes. SMEs and startups are not exempt from these threats; in fact, they may be even more vulnerable due to limited cybersecurity measures. IT professionals possess the expertise to implement robust security protocols, safeguarding sensitive data and protecting against cyber attacks.
IT professionals contribute to strategic decision-making by providing insights into emerging technologies and trends that can give businesses a competitive edge. Whether it's adopting cloud computing solutions, harnessing the power of big data analytics, or implementing Internet of Things (IoT) devices, IT professionals help SMEs and startups stay ahead of the curve.
Despite the undeniable benefits that IT professionals bring to the table, there are challenges that SMEs and startups may face in fully recognising their importance. One such challenge is the perception of IT as a cost centre rather than an investment. However, viewing IT expenditures through the lens of long-term value creation can shift this mindset, highlighting the role of IT professionals as enablers of growth and efficiency.
Outsourcing IT services to a Managed Service Provider (MSP) gives businesses access to specialised expertise, and the partnership brings further advantages for SMEs and startups. MSPs not only bring technical know-how but also provide proactive monitoring, maintenance, and support services, ensuring continuous uptime and reliability. By entrusting their IT needs to an MSP, businesses can benefit from cost-effective solutions, scalable services, and peace of mind, allowing them to focus on their core operations and strategic objectives. This collaborative approach fosters a symbiotic relationship where SMEs and startups can leverage the expertise and resources of MSPs to navigate the complexities of the digital landscape effectively.
The bottom line is that SMEs and startups must recognise the indispensable role of IT professionals in driving their success and competitiveness. By embracing IT expertise as a strategic asset rather than a mere operational necessity, businesses can unlock a world of opportunities for growth, innovation, and resilience in an increasingly digital world. Investing in IT professionals is not just about staying technologically relevant; it's about future-proofing the business and laying the foundation for sustained success.