Beyond the bubble: can AI boost work without dulling skills or destroying the planet?
Every breakthrough technology forces trade-offs. The printing press spread knowledge but also propaganda. The car unlocked freedom and guzzled carbon. AI will do the same, and (with mass adoption) faster. So, how do we capture its benefits without surrendering judgement?
At BRODIE, a sustainability consultancy, we have been asking this question in three ways: what AI means for sustainability, for client work, and for personal development. The answers point to how we can use AI thoughtfully.
Sustainability: Is AI an environmental disaster?
BRODIE advises on environmental and social issues because we care about making an impact. So, what should we make of AI?
You may well have heard that AI is an environmental disaster, burning through vast quantities of energy and water. There is truth to this, but the reality is more nuanced.
Individual use vs systemic use
For an individual, AI use is a rounding error, a point made vividly by a comparison graphic that has been doing the rounds.
In short, if you care about your environmental impact, then take one less flight, eat one less steak, and ride more trains. One (or 180,000) fewer prompts will not move the dial.
Transparency remains weak
We have only recently started to see ‘per-prompt’ energy figures from AI companies. OpenAI and Google put them at 0.34 and 0.24 watt-hours of electricity respectively, or about the same as watching TV for less than nine seconds.
This does not sound too bad, but these per-prompt numbers exclude the substantial energy, emissions and water required to train the models in the first place.
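As a sanity check on that comparison, here is a minimal back-of-envelope sketch. The per-prompt figures are the companies’ reported numbers; the 150 W television draw is an assumption of mine, not something either company states.

```python
# Back-of-envelope: how many seconds of TV does one prompt's electricity buy?
# Reported figures: OpenAI ~0.34 Wh/prompt, Google ~0.24 Wh/prompt.
# Assumption (mine, for illustration): a typical television draws ~150 W.

TV_WATTS = 150  # assumed television power draw

for company, wh_per_prompt in {"OpenAI": 0.34, "Google": 0.24}.items():
    seconds = wh_per_prompt * 3600 / TV_WATTS  # Wh -> watt-seconds, then / watts
    print(f"{company}: {wh_per_prompt} Wh ≈ {seconds:.1f} seconds of TV")

# OpenAI: 0.34 Wh ≈ 8.2 seconds of TV
# Google: 0.24 Wh ≈ 5.8 seconds of TV
```

On those assumptions, the ‘less than nine seconds’ comparison holds.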
Efficiency brings emissions down; reasoning pushes them back up
As engineers refine them, models are becoming far more efficient. Google reports a 44x drop in per-prompt footprint over 12 months. But those efficiency gains are offset by growing use of more compute-intensive reasoning, image and video models.
The real determinant of net emissions will be scale of use. Google has reduced per-prompt emissions, but its overall carbon footprint has risen 51% since 2019 (or 65%, depending on who you believe).
In aggregate, we are dealing with a significant new impact. Impossibly vast new datacentres under construction are testament to this.
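A minimal sketch makes the arithmetic plain. The 44x efficiency gain is Google’s reported figure; the 100x growth in prompt volume and the baseline numbers are assumptions chosen purely for illustration.

```python
# Illustration: why total footprint can rise while each prompt gets cleaner.
# The 44x per-prompt efficiency gain is the reported figure; the 100x volume
# growth and the baseline values are assumed, for illustration only.

baseline_prompts = 1_000_000   # arbitrary baseline prompt volume
g_per_prompt = 4.4             # arbitrary baseline footprint, g CO2e per prompt

efficiency_gain = 44           # per-prompt footprint divided by 44
volume_growth = 100            # assumed growth in number of prompts

before = baseline_prompts * g_per_prompt
after = baseline_prompts * volume_growth * (g_per_prompt / efficiency_gain)

print(f"Total footprint changes by {after / before:.2f}x")  # ~2.27x: up, not down
```

Whenever volume grows faster than efficiency improves, the aggregate footprint rises even as each individual prompt gets cleaner.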
Avoided emissions are important but speculative
Even if AI uses a lot of energy, this does not mean the world will see a net increase in emissions; an intensive query that saves hours of work can be environmentally positive. This is the magic of ‘avoided’ emissions.
Thinking on a larger scale, what if AI can help design, optimise and uncover new, innovative ways to reduce emissions?
Best estimates suggest AI could mitigate 5-10% of global emissions by 2030. However, these are speculative and cannot be treated as guarantees, or as blank cheques for indiscriminate use.
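To put that range in context, a rough sketch: the 5-10% share is the estimate quoted above, while the ~57 Gt CO2e figure for annual global greenhouse gas emissions is an outside approximation (broadly in line with recent UNEP estimates), not a number from those studies.

```python
# Rough scale of the 5-10% mitigation range.
# Assumption: global GHG emissions of ~57 Gt CO2e per year (approximate).

GLOBAL_GTCO2E = 57.0

for share in (0.05, 0.10):
    print(f"{share:.0%} of global emissions ≈ {GLOBAL_GTCO2E * share:.1f} Gt CO2e/year")

# 5% ≈ 2.9 Gt, 10% ≈ 5.7 Gt - on the order of a major economy's annual emissions
```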
To summarise:
The environmental impact of AI is complex.
As an individual, focus first on other far larger decarbonisation levers.
Regulators (and companies) will face tough choices balancing AI growth with climate targets.
Environmental impact sets the backdrop for AI in society and our thinking on it. But as consultants, we face another practical question: can these tools be trusted to create quality client work?
Client work: Is AI a blessing or liability for consultants?
If sustainability bears on whether AI should be used at all, confidentiality and judgement determine whether we should adopt it in our own practice.
Confidentiality is non-negotiable, but poorly understood
To start with the red lines, free chatbots are not the right place to put confidential client information. This is more of a risk than many senior leaders realise: even if a business does not think it is using AI, its employees likely are. The term ‘shadow AI’ was coined to capture this rise in unsanctioned use.
BRODIE will not put confidential client data into consumer chatbots – period. If your organisation is considering AI, sign up only to tools with enterprise data protection and clear written commitments that your data will not be used to train foundation models. Microsoft 365 Copilot makes that commitment, which is why the UK Civil Service is slowly adopting it.
Copyright and training data are unsettled
Beyond the legal implications of what you yourself upload, spare a thought for the AI companies themselves. ChatGPT and other large language models (LLMs) are trained on vast reams of data, and major cases are moving through courts in the US and UK to hash out whether this wholesale scraping of information (much of it copyrighted!) is lawful.
These lawsuits are spearheaded by angry companies with good lawyers and shrinking site visits (e.g., The New York Times).
As a small firm with creative output, we also produce IP that we wouldn’t want stolen; the sensitivity is understandable.
Judgement is the product
On the one hand, AI is becoming significantly more intelligent, at a rapid pace.
On the other hand, these systems ‘hallucinate’. This is the polite way of saying that they frequently make up information while sounding like confident experts. There are some indications that this problem is intrinsic to LLMs, and might be getting worse. Additionally, while these systems are great at synthesising their training data, it isn’t clear that AI can generate genuinely novel ideas and insights (yet).
So, can AI soon replace consultative work?
The value BRODIE brings is not text or slides – it is judgement. Tools can certainly lift productivity; a UK Civil Service pilot of Microsoft Copilot saved participants an average of 26 minutes a day. But for now, knowledge work still depends on human judgement, with AI in a supporting role.
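For a sense of what those minutes compound to, a quick sketch; the 26-minute figure is from the pilot, while the working-days and hours-per-day values are assumptions of mine.

```python
# What does saving 26 minutes a day add up to over a year?
# 26 min/day is the pilot's reported figure; ~225 working days/year
# and 7.5-hour days are assumptions, for illustration.

minutes_saved_per_day = 26
working_days_per_year = 225
hours_per_working_day = 7.5

hours_per_year = minutes_saved_per_day * working_days_per_year / 60
days_per_year = hours_per_year / hours_per_working_day

print(f"≈ {hours_per_year:.0f} hours, or ≈ {days_per_year:.1f} working days a year")
# ≈ 98 hours, or ≈ 13.0 working days a year
```

On those assumptions, that is roughly two and a half working weeks a year: a meaningful productivity gain, but an assistant’s contribution rather than a consultant’s replacement.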
It is possible that AI will outstrip human judgement at some point in the next decade or two. Some even think it likely. As with any breakthrough, we need to separate signal from noise. Countless ‘AI-driven’ startups will appear, most will collapse, and valuations of the bigger players may sharply deflate. But a bubble bursting does not mean the technology disappearing: the internet survived the dot-com crash and went on to reshape everything. AI will follow a similar trajectory.
To summarise:
AI capabilities are rapidly advancing. What it can’t do today, it may do tomorrow.
AI is increasingly powerful for knowledge work when you use it to complete great work yourself, more efficiently.
It can be a detriment if you use it to do ‘great’ work for you.
For now, even if AI cannot replace consultants outright, it can still change how individual professionals work and learn.
Personal practice: Is AI a recipe for efficiency or laziness?
AI can make you faster; it can also erode skill development if misused.
The trade-off for efficiency is learning. Especially early in a career, growth often comes from struggling, failing, and learning from mistakes. More junior employees who use AI risk leaning on it as a crutch to produce work that sounds impressive but is ultimately hollow. And the consequences are not yours alone: hand AI-generated ‘workslop’ to colleagues and they have to untangle and rework it, which lowers rather than boosts productivity and morale.
It may be, therefore, that AI is most useful for the group that studies suggest is most cautious in adopting new tech: older professionals. With years of experience and judgement under their belts, they are well placed to use AI as an accelerator rather than a substitute.
To summarise:
Be wary of ‘cognitive offloading’, while recognising the ways AI can facilitate growth.
If tools are being used as a thought partner and enabler, this can be a huge boost to professional growth.
If it is drafting all your documents and slide decks, it probably is not.
Conclusion
I’ve walked through three levels of analysis: systemic, organisational and personal. Taken together, they point to a consistent theme. Use AI where it strengthens your ability to think, and avoid it where it erodes trust, quality or your own capacity for cognitive labour.
On a practical level:
We use AI to increase efficiency, not replace judgement
We keep client data within governed systems
We cite (and check claims against) primary sources only
We stay literate about impacts and risks in a fast-moving field
The right stance lies between disavowal and blind adoption. Vigilance can make AI serve work rather than the other way round.
Disclaimer: This article was written entirely by me (a human!). I asked ChatGPT for critical feedback on an earlier draft, some of which I implemented.
Written by Benjamin Sobel, October 2025