A total Recall for data protection

What matters

Microsoft has relaunched its “Recall” feature on some AI-powered PCs and laptops.

What matters next

What are the data protection concerns behind its rapid withdrawal and cautious relaunch with additional safety features?

Microsoft has just announced that it has begun relaunching the “Recall” feature on some of its AI-powered PCs and laptops. The feature, which automatically takes a continuous series of screenshots of user activity, was withdrawn shortly after its launch in May 2024 due to concerns about privacy. The new rollout will be gradual, reaching Europe later this year.

Through screenshots and AI-enhanced processing, the feature can automatically “remember” content, including photos, browser history, and app and messaging activity, which is then indexed and made searchable using natural language. This means that a user can, for example, retrieve information seen months ago on a random website just by searching in Recall for “thin crust pizza”. This memory prompt will help someone who vaguely remembers finding a useful recipe some time back but has no idea where they saw it.

Why was the functionality rapidly withdrawn and cautiously relaunched only now, with additional safety features?

Data protection concerns

The list of data protection concerns was long. Critics warned that the feature could enable malicious third parties to trace user activity; some called it a gift to hackers, capturing sensitive details such as bank account numbers and making them accessible to third parties.

Screenshots make the security community nervous because they make temporary access permanent and store a sender’s data without their knowledge. In secure environments you give up your mobile phone at the door because of the security risk, so alarm bells rang when screenshotting was baked into everyone’s laptop.

Does data protection law apply?

European and UK data protection law, founded in the (UK) GDPR, does not apply where a natural person processes personal data in the course of a purely personal or household activity (Article 2(2)(c)). Perhaps recognising this, Recall is aimed for the moment at personal PCs, and there are strict controls for its use at work and in school.

Organisations should still think through the implications of such new AI-powered features as they may soon become a routine part of new computers, laptops and phones. The UK data regulator, the ICO, was certainly concerned enough to issue a formal statement about it last year.

Security

Looking through the lens of privacy and data protection, has Microsoft done enough?

On the overall security issue (Article 32 of the UK GDPR), Microsoft has worked hard to address concerns. The feature now requires user opt-in, can be switched off at any time, and can automatically filter out sensitive content. Importantly, on multi-user PCs snapshots are not shared between users, and each user has individual control over Recall settings.

Recall can only be enabled when at least one secure sign-in method (such as biometrics) is in use. The software has been improved and data is held in encrypted form.

But deep in the explanation (and there’s a lot of it – more than 10,000 words) things get a touch complex. A filter is in place to remove snapshots of potentially sensitive information such as passwords or credit card numbers. In addition, apps and websites can be manually filtered out by users. However, these safety features require a lot of proactive management and have significant limitations: they do not necessarily work where information is embedded from another website, where snapshots include open browser tabs, or on certain browsers at all.

Data sharing

When it comes to controls over data sharing (raising issues of lawfulness, fairness and transparency of processing under Articles 5 and 6), the good news is that information is kept on-device only. However, care must still be taken: although text summaries are created locally by an on-device small language model (SLM), information is sent externally when Recall is used in combination with other tools, such as when users ask for related information online or want image manipulation via third-party apps like Paint.

And beyond the laptop owner, it remains an open issue whether other people’s personal data should be stored in such volume, and in such accessible ways, without their knowledge, and for such arguably trivial reasons. Data protection law requires that processing is lawful and demands rigour and transparency about data sharing, retention and use. Where the law applies, it will be difficult to justify the necessity of processing, and to answer purpose limitation concerns, where the technology aims merely to make life very slightly easier for the forgetful user.

DSARs

As with so much clever new tech, where information is indexed, stored and shared across platforms there is the perpetual difficulty of responding to data subject rights requests (DSARs, Article 15), as explained in this Shoosmiths article. One welcome development is that screenshots can now be deleted at any time. This is a positive move for anyone trying to exercise rights over captured data, although device owners in a professional context should remember that personal data must not be deleted once a DSAR has been received. Yet more proactive management is required.

Where does this leave us?

Overall, the relaunch opens some important questions.

For individuals, with recent reports of people being refused entry to the US because of their device messaging history, the creation of an easily searchable and dynamic “story” of the user and their contacts may feel like a step in the wrong direction.

For businesses, it will be ever more important to understand exactly what that new AI-powered kit may be capable of. The UK government has just issued a Code of Practice for understanding basic cybersecurity for boards of medium and large companies: an important step, but also testament to the huge IT knowledge gap already opening up for key decision-makers. As with AI transcription, the first step is to look at internal policies for use, discussed in Is AI transcription a DSAR time-bomb?

A question of control

For society at large, can fundamental data protection principles, particularly data minimisation, survive as technology enables processing of personal data in new ways we barely understand or control? Plenty of new developments (not least the training of AI systems, automated decision-making, and the use of blockchain) already exist in a world speaking a different language from the GDPR.

In such a world, who is doing the data protection regulation in practice? Many consider that much of the control is down to the tech companies. Microsoft states: “models have undergone fairness assessments, alongside comprehensive responsible AI, security and privacy assessments, to make sure the technology is effective and equitable while adhering to Microsoft’s Responsible AI best practices.”

Good to know, but with increasing political pressure on technology companies, it may be time to consider recognised professional standards for AI developers. Other potentially harmful innovations, like robotic surgery, don’t rely on self-regulation, or on an outcry from online commentators.

And while businesses strive to understand what new technology is out there, and how to use it without breaking the law, it may be time for a fresh look at whether data protection rules, especially the principle of data minimisation and the practical exercise of data subject rights, should evolve alongside the extraordinary capabilities now being put into our hands.

Disclaimer

This information is for general information purposes only and does not constitute legal advice. It is recommended that specific professional advice is sought before acting on any of the information given. Please contact us for specific advice on your circumstances. © Shoosmiths LLP 2025.
