The EU AI Act: 5 Tips That Every Digital Project Manager Needs to Know Before Using AI Tools
[with a FREE checklist]
AI has taken off like my teenage son’s appetite for homemade sausage rolls. 🚀
And I’ve no doubt that you’re starting to use AI tools more than ever. Those meeting notes now magically appear after your client calls - complete with an action list. Firefly brings creative ideas to life in record time. And your junior developers have gained 10 years’ worth of debugging experience overnight by using Copilot or ChatGPT.
AI tools are helping you and your team get more done, faster.
This article is about staying compliant when using AI in your workflow for client work.
The EU AI Act
On 1st August 2024, the EU AI Act arrived, and it’s changing the game for anyone using AI in their work—even if you’re not building these tools yourself.
[Quick interlude]
To save you a load of boring repetition, I’ll just call the EU AI Act the EAA from now on.
And although I’m referring to the EU in this article, the EAA, much like the GDPR with its high standards, should set you off on the right foot for AI compliance best practice in any country.
Lastly, I’m not a lawyer and this article is not legal advice in any shape or form, so please don’t interpret it that way. Go speak to a data privacy lawyer or your company’s legal team if you need to. 🙏
Onwards 👇
The EAA sets out strict rules on safety, transparency, and data use. As a Digital Project Manager, you need to know how this law could impact you and your colleagues.
In this article, you’ll get a clear view of what it means and how to keep using AI in a way that’s safe, compliant, and within the bounds of your client contracts.
Understanding the Basics
The EAA is a legal framework designed to regulate AI across the EU. Its aim? To make sure AI is used safely, respects people’s rights, and is transparent. A bit like the GDPR.
The Act puts AI into four risk categories:
Unacceptable Risk: AI that’s banned due to clear harm (e.g., social scoring).
High Risk: AI that impacts health, safety, or rights (e.g., facial recognition).
Limited Risk: AI that needs transparency (e.g., chatbots that should reveal they’re not human).
Minimal Risk: AI with little to no risk. Generally exempt from heavy regulation.
Most of the tools you use—like Copilot or ChatGPT for content or Firefly for visuals—probably fall under limited or minimal risk.
But be careful. You should never input sensitive content such as Personally Identifiable Information (PII), Protected Health Information (PHI), financial information, or confidential client information.
You should also check outputs against their sources, to make sure the facts are correct and that you’re not infringing copyright or intellectual property.
Stay on the right side of your Chief Privacy Officer
You might be using a version of these tools that’s been set up for internal use only, so your inputs shouldn’t end up in the public domain.
But check your company policies, and only use tools that have been ‘validated’ by going through the proper privacy, compliance, and legal checks, with NDAs and contracts in place with the providers.
Career-saving Principles
I find it helpful to keep these two basic principles in mind:
Data in - What information am I putting into the tool?
Data out - What information will I get out, and what am I going to do with it?
This sounds like common sense, but let it be the little voice in your head that says “wait a second” before you accidentally press ‘upload’ with that spreadsheet of your client’s KPI data.
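To make that little voice a bit more concrete, here’s a minimal Python sketch of a “data in” screen you could run before pasting anything into a tool. The patterns and the `check_before_upload` helper are purely illustrative, not part of any official tooling; a real check would need the PII/PHI definitions your compliance team works to.

```python
import re

# Illustrative patterns only -- a real screen needs your compliance
# team's definitions of PII/PHI/financial data for your region.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "UK phone number": re.compile(r"\b0\d{2,4}[ -]?\d{3,4}[ -]?\d{3,4}\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_before_upload(text: str) -> list[str]:
    """Return a list of warnings for anything that looks sensitive."""
    return [
        f"Possible {label} found -- wait a second before uploading!"
        for label, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(text)
    ]

if __name__ == "__main__":
    draft = "Q3 KPIs attached. Contact jane.doe@client.com or 020 7946 0958."
    for warning in check_before_upload(draft):
        print(warning)
```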
Key Point: Know where each AI tool you use sits on the risk scale. If in doubt, speak to your compliance team, or just don’t use it!
5 tips to keep you on the right track 👇
Tip 1: Only Use Approved Tools
The gold standard for compliance is to only use tools that have been vetted by your organisation.
That means tools that have been approved for use, that you’ve been properly trained on, and that come with guidelines to follow.
You also need to know what tools your vendors are using, and have the right subcontracts in place to make sure they follow the same guidelines.
Now, I’m assuming you have an individual (or a team) in place to do all of this. But if not, then the compliance check below is a good place to start.
Action: Go to the EAA website and use this handy Compliance Checker to see where your organisation’s practices might sit under the Act.
Tip 2: Make AI Transparent: Let Clients Know
Firstly, I’d strongly suggest putting your company’s use of AI tools into your client contracts.
This way you’re setting expectations and nothing should come as a surprise.
The EAA pushes for transparency. This means you need to make it clear to clients (and sometimes their audiences) when AI is involved.
For example, if your creatives are using AI to generate images, and you present them to your client, then make that glaringly obvious. Use a watermark and add disclaimers to your presentation.
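If you want a quick way to stamp that watermark programmatically, here’s a minimal sketch using Python’s Pillow library to overlay a visible label on a draft image. The filenames and label text are placeholders of my own; your creative team’s tools may well handle this better.

```python
from PIL import Image, ImageDraw

# Placeholder filenames -- swap in your own draft and output paths.
image = Image.open("concept_draft.png").convert("RGBA")
overlay = Image.new("RGBA", image.size, (0, 0, 0, 0))
draw = ImageDraw.Draw(overlay)

# Semi-transparent label in the bottom-left corner.
label = "AI-generated concept (Adobe Firefly) - for review only"
draw.text((10, image.height - 30), label, fill=(255, 255, 255, 180))

# Merge the overlay and save a flattened copy for the client deck.
watermarked = Image.alpha_composite(image, overlay)
watermarked.convert("RGB").save("concept_draft_watermarked.png")
```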
In addition, make sure you know the sources behind the images being generated. For example, Firefly is trained on Adobe Stock images, openly licensed content, and public domain imagery, but do you know where Midjourney gets its training data from!?
You don’t want to get a phone call from someone’s lawyer for copyright infringement.
Action: Create a simple “AI Use” disclosure in your scope of work. This way, clients understand where AI could be used and why.
Tip 3: Balance AI with Human Oversight for Better Results
The EAA stresses the need for transparency and accountability.
AI tools can produce content very quickly, but lack your team's judgement. So don’t leave them to run on their own.
By adding human oversight, you can catch errors, bias, or hallucinations, and fact-check the sources behind each response.
Your team’s involvement in reviewing outputs not only makes the content more reliable, but also demonstrates accountability, reducing the risk of unintentional errors or misrepresentation.
Action: Include an extra QC step in your process. Assign someone to read through the outputs, double-check sources, edit as needed and make sure your presentation has all the right checks and balances before being presented.
Tip 4: Tracking AI Usage: Keep a Simple Audit Log
Create a clear record of the AI tools you use and how you use them. If you ever need to show you’ve been using AI, a usage log makes this simple. Include the tool’s name, purpose, and any disclosures made.
For example, if you use Firefly to create campaign visuals, log the tool’s purpose and any client agreement about AI involvement. This record doesn’t need to be complicated, but should cover the basics to show you’re using AI compliantly.
Action: Go one step further. Record the exact prompt/input and output in an audit log. Put it in a Word document (or something similar) and give it a reference code so you can provide evidence on how a piece of work was created, in the event that you might need it.
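If you’d like that log to be effortless to keep, here’s a minimal Python sketch that appends each AI interaction to a CSV file. The file name, columns, and `log_ai_use` helper are my own suggestion rather than anything the EAA prescribes; shape the fields around what your compliance team actually needs.

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_audit_log.csv")  # suggested name, not an EAA requirement
FIELDS = ["reference", "timestamp", "tool", "purpose",
          "prompt", "output_summary", "disclosure"]

def log_ai_use(reference, tool, purpose, prompt, output_summary, disclosure):
    """Append one AI interaction to the audit log, creating the file if needed."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "reference": reference,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "tool": tool,
            "purpose": purpose,
            "prompt": prompt,
            "output_summary": output_summary,
            "disclosure": disclosure,
        })

# Example entry -- all values here are illustrative.
log_ai_use(
    reference="AI-2024-001",
    tool="Adobe Firefly",
    purpose="Campaign hero visuals",
    prompt="Autumn street scene, warm tones, no people",
    output_summary="Three concept images saved to the shared drive",
    disclosure="AI use disclosed in the scope of work",
)
```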
Tip 5: Stay Ahead: Keep Up with Updates to the EAA
AI regulation is evolving, and the EAA itself comes into force in stages over the next few years. Stay up to date with these changes so you can spot new compliance needs before they impact your projects.
Action: Check out the EAA Substack to keep up to date with all the latest info on the EAA.
Your FREE loot with an extra tip
We all love a freebie. And if you’ve made it this far, then you certainly deserve it 🥳
To guide you along your journey, I’ve put together an AI checklist.
The document also contains a BONUS 6th TIP about using prompts to help you stay compliant.
It’s on Notion, which is an awesome tool for Project Managers.
So if you don’t have an account, then I highly recommend you sign up for the free version. You get a lot of features without dropping a penny!
Thanks for reading, and let me know if you have any comments.