Some thinking on Artificial Intelligence

This piece began as a set of reflections following my involvement in an AI panel at the LAPG conference and has evolved through further thinking about what this means in practice for our sector. 

Setting the context: opportunity, risk and power

It is impossible to talk seriously about artificial intelligence without acknowledging the wider context in which it sits. The environmental impact of data centres, the extraction of resources and labour from the global south, the risks associated with automated decision-making, and the displacement of human judgement and human jobs are not side issues. They are fundamental questions about power, accountability and public good.

At the same time, it would be naïve (and probably irresponsible) to ignore the potential of AI to improve how we work. Used carefully and thoughtfully, it may help organisations reach more people, reduce delay, improve consistency, support over-stretched staff, and create better experiences for clients navigating complex systems. It may also offer efficiencies that matter in a sector under constant financial pressure.

All of these tensions will continue to play out over the coming years. For now, though, the most useful contribution is not another abstract debate but some practical reflections on what organisations can sensibly be doing.

AI is not an IT project

The first and most important point is this: AI is not an IT project. Yes, technical expertise matters. But decisions about whether and how to use AI are fundamentally about risk appetite, ethics, accountability, cost, safety and organisational values. Those are management and governance questions. They belong with trustees, owners, directors and senior managers – informed by staff and technical specialists, but not delegated to them by default.

You are probably already using AI

A second uncomfortable truth is that many organisations are already using AI, whether they realise it or not. Some will have actively chosen to do so. Others will find it embedded within software they already use. In many cases, individual staff, consultants, volunteers or trustees will be experimenting with AI tools independently, often with good intentions and without any organisational discussion.

The starting point, therefore, is simply to understand what is happening. That means taking time to review current use: what tools are being used, by whom, for what purposes, and with what data. This is not about catching people out. It is about surfacing reality so that informed decisions can be made.

Understanding and assessing risk

Once you have that picture, the next step is a proper risk assessment. What risks does the current or proposed use of AI create for clients, for staff, for the organisation and for public trust? Are those risks acceptable? If they are, what mitigations are needed? If they are not, what needs to stop or change? For advice and legal organisations in particular, this inevitably includes risks around accuracy, confidentiality, data protection, professional obligations and client harm.

Data protection and DPIAs as practical tools

Where AI use involves personal data – and much of it potentially will – a Data Protection Impact Assessment (DPIA) is not an optional extra. It is a practical tool to help you understand how data is processed, where it goes, how secure it is, and whether its use is compatible with your obligations under UK GDPR. In many cases, the DPIA process itself will surface issues that prompt a rethink.

Making clear organisational decisions

Out of this should come some clear organisational decisions. Not vague encouragements or informal norms, but explicit choices about what is permitted, what is prohibited, and what sits in a grey area pending further review. This is where a policy matters.

A policy does not need to be long or technical. Its job is to set boundaries, clarify ownership, and make expectations clear. Procedures and guidance can sit underneath it and evolve over time. An example of a deliberately cautious policy might look like this:

A simple Artificial Intelligence (AI) Policy

We recognise both the opportunities and the risks associated with Artificial Intelligence (AI) and AI-enabled software. Our approach is deliberately cautious and will remain under regular review as the technology, legal and regulatory landscape continues to evolve.

At present, no AI system or AI-enabled software may be used for organisational or client-related work unless it has been formally assessed through a documented risk analysis and explicitly authorised for use by the organisation.

Where any proposed use of AI involves the processing of personal data, a Data Protection Impact Assessment must form part of that assessment and approval process.

Only those AI tools listed on the organisation’s Approved AI Register may be used. The register is maintained by [insert role], who also acts as the internal lead on AI governance and risk management.

The unauthorised use of AI or AI-enabled software by any director, employee, consultant, volunteer or trustee will be treated as a serious matter and may result in disciplinary action.

This policy will be kept under regular review to ensure it remains aligned with legal requirements, best practice and the organisation’s risk appetite.

Keeping pace with change

Finally, none of this is static. The pace of change in this space is extraordinary. Regulatory guidance, professional standards and technical capabilities are evolving far faster than most organisational policies ever do. Keeping AI under review therefore means more than an annual policy tick-box. It means staying alert, sharing learning, and being willing to adapt as understanding improves.

A note of caution

A word of caution is necessary. In its current form, AI is not reliable. Many systems are opaque about how data is processed and where it is stored. Many are not designed for accuracy. They can and do get basic facts wrong, and they frequently invent material that looks convincing but is entirely false. This includes cases, citations, quotations and guidance.

AI can be useful, but it is not authoritative. It should not be trusted to replace professional judgement. Where it is used to assist drafting or analysis, there must always be meaningful human oversight, scrutiny and responsibility for the final output.

Handled carefully, AI may become a genuinely helpful tool. Handled casually, it poses real risks to clients, organisations and the wider justice system. For now, caution is not resistance to progress; it is a form of professional care.

Some practical top tips

Start by assuming AI is already in your organisation.

This is usually more accurate than assuming it is not. Treat discovery as a neutral fact-finding exercise, not a disciplinary one. You are trying to understand reality, not catch people out.

Keep ownership with senior leadership and trustees.

AI raises questions about risk, ethics, accountability and public trust. Those are not technical questions and should not be delegated away, even where technical expertise is essential to informing decisions.

Be explicit rather than informal.

Unwritten rules about AI use create uncertainty and uneven practice. A short, clear policy, even a cautious one, is almost always better than silence.

Match ambition to capacity.

If your organisation does not have the time or expertise to properly assess, monitor and govern an AI tool, that may be a sign that you are not ready to use it yet. Caution is not failure.

Assume outputs are wrong until proven otherwise.

AI can sound confident while being incorrect. Treat it as a drafting or thinking aid at most, not as a source of truth, and never as a substitute for professional judgement.

Use data protection processes as thinking tools, not hurdles.

A DPIA is often the most practical way of understanding what an AI system actually does with data. If completing one feels impossible, that is usually telling you something important.

Review little and often.

Given the pace of change, a light-touch, regular review is usually more effective than a heavy policy rewrite every few years.

Some thinking points to sit with

What problem are we actually trying to solve?

Are we looking for speed, consistency, capacity, cost savings, or something else entirely? AI should be a response to a defined need, not a solution in search of a problem.

Where would things go wrong if this failed?

Not in theory, but in practice. Who might be harmed, misled or disadvantaged if an AI tool gets it wrong, and how visible would that harm be?

Who carries responsibility when something goes wrong?

Not who clicked the button, but who is accountable organisationally, professionally and legally.

What does “good enough” look like for us?

Absolute accuracy and zero risk are unrealistic. But different organisations will have very different thresholds for acceptable error, particularly where vulnerable clients are involved.

Are we being seduced by confidence and polish?

AI outputs often look impressive. That does not mean they are correct, appropriate or safe. Confidence is not competence.

How does this align with our values and public purpose?

Efficiency matters, but it is not neutral. Decisions about automation, speed and scale have ethical dimensions, particularly in advice, legal and support work.

If we had to explain this decision publicly, would we be comfortable?

This is often the simplest and most effective test of judgement.
