You can ask AI to write the code, but the hard part is… everything after that
Photo by Markus Spiske / Unsplash
“I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code.” (Dario Amodei, CEO and co‑founder of Anthropic, March 2025)

A year ago, most people – including me – thought that Amodei’s prediction was verging on the ridiculous. Now, 10 months later, not so much.

Drew Breunig calls whenwords “silly” but, like him, I find it’s got me thinking. It’s a relative time formatting library that exists as pure specification: no code at all, just a detailed spec and language-agnostic test cases that AI agents can implement on demand in whatever language is needed – Ruby, Python, Rust, Elixir, etc.
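To make that concrete, here’s a minimal sketch of the kind of implementation an agent might produce from a spec like that. To be clear, the thresholds, wording, and test values below are my own illustrative assumptions, not whenwords’ actual specification.

```python
# A sketch of what an AI agent might generate from a pure-specification
# relative-time library. Thresholds and strings here are illustrative
# assumptions, not the real whenwords spec.

def time_ago(seconds: float) -> str:
    """Format an elapsed duration (in seconds) as human-readable relative time."""
    if seconds < 60:
        return "just now"
    minutes = seconds / 60
    if minutes < 60:
        n = round(minutes)
        return f"{n} minute{'s' if n != 1 else ''} ago"
    hours = minutes / 60
    if hours < 24:
        n = round(hours)
        return f"{n} hour{'s' if n != 1 else ''} ago"
    n = round(hours / 24)
    return f"{n} day{'s' if n != 1 else ''} ago"


# The spec's language-agnostic test cases would translate into assertions like:
assert time_ago(30) == "just now"
assert time_ago(90) == "2 minutes ago"
assert time_ago(3 * 3600) == "3 hours ago"
```

The interesting part is that the spec and its test cases are the durable artefact; the generated code itself becomes almost disposable.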

Meanwhile, Salvatore Sanfilippo (antirez) spent his evening with Claude Opus 4.5 and watched it implement new features, fix test failures, and build libraries faster than he could have alone. His conclusion? “[W]riting code is no longer needed for the most part.”

We’re getting to the stage where Amodei’s prediction is reality. What should we do about it?

Where AI belongs and where it doesn’t

Instead of thinking about software as a monolithic thing, we need to think about it living in layers. AI does not belong equally in all of them.

I don’t pretend to be an expert, but modern software typically sits across:

  • Infrastructure – cloud platforms, containers, databases, networking, the systems that keep everything operational
  • Frameworks and libraries – the foundational code that many applications depend on, so highly performance-sensitive and subject to lots of scrutiny
  • Applications – where most product-specific functionality lives, often where the real variety and change happen
  • User-facing configurations – scripts, workflows, dashboards, automations, and the countless small tools that make organisations work

AI today is already generating large portions of the application and configuration layers. It’s starting to design infrastructure-as-code, usually under human supervision. At the framework and core library layer, things are a bit more cautious: there are concerns around performance, security, and the fact that millions of other systems may depend on it.

Different layers have different needs, and AI can’t provide the support and community care that’s required for some layers.

Who controls what gets built?

My focus in this post isn’t developers. It’s people who can’t code. For the first time, you can turn a business or operational problem into working software without passing through technical gatekeeping. But should you? And in which scenarios?

Charity CEOs

Two women in suits standing beside a whiteboard
Photo by wocintechchat / Unsplash

Let’s imagine you run a small or mid-sized charity. Your programmes generate data from client interactions, funding records, volunteer hours, and so on. At the moment, getting useful reports out of your systems is slow and often depends on a contractor, manual copy/pasting, or overstretched IT support.

Current AI tools can already generate database queries, reporting scripts, and spreadsheet automations from natural language descriptions. You can say “show me which client cohorts had the highest retention rate by programme over the last 12 months” and get something workable in minutes rather than weeks.
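As a hedged sketch of what that might look like under the hood, the assistant might turn that sentence into something like the script below. The table and column names (enrolments, programme, enrolled_on, still_active) are assumptions I’ve invented for illustration, not your actual database.

```python
# Illustrative only: an AI-generated reporting query for "which client cohorts
# had the highest retention rate by programme over the last 12 months".
# The schema (enrolments table and its columns) is an assumption.

import sqlite3

conn = sqlite3.connect("charity.db")  # assumed local reporting database

query = """
SELECT programme,
       COUNT(*) AS clients,
       AVG(CASE WHEN still_active = 1 THEN 1.0 ELSE 0.0 END) AS retention_rate
FROM enrolments
WHERE enrolled_on >= date('now', '-12 months')
GROUP BY programme
ORDER BY retention_rate DESC;
"""

for programme, clients, retention in conn.execute(query):
    print(f"{programme}: {retention:.0%} retention across {clients} clients")
```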

With great power comes great responsibility, though. You are now in a position to make data-driven decisions based on queries you did not write and may not fully understand. You have to check results, question surprising outputs, and be clear when you’re not sure about something.

You also need to think about data protection. Most powerful AI services are using cloud infrastructure that, in some cases, retains prompts for logging or model improvement. So before pasting sensitive client data into any system, you need to know where it goes, whether it can be deleted, and how that aligns with your legal and ethical obligations.

If you build reports this way and rely on them, you should keep copies of the queries or scripts, document what they do, and have at least a basic plan to recreate them if a vendor changes pricing or terms or goes offline.
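Even something as simple as a commented header on each saved script goes a long way. A sketch, with hypothetical file and table names, might look like this:

```python
# retention_by_programme.py (hypothetical filename)
#
# Purpose:    Board report: retention rate per programme over the last 12 months.
# Data:       Reads the 'enrolments' table in charity.db; no client names leave the database.
# Provenance: First drafted with an AI assistant; reviewed and spot-checked by <name> on <date>.
# Recreate:   The prompt used to generate this script is saved alongside it, so it can be
#             rebuilt with a different tool if the vendor changes terms or disappears.
```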

Consultants

Woman in grey sweater using laptop while someone else looks on
Photo by Cherrydeck / Unsplash

Let’s say that you work in a co‑operative or agency helping small businesses or social enterprises. A client comes to you needing a system to track project timelines, identify bottlenecks, and send alerts.

Previously, you might recommend a generic SaaS tool, look at deploying an Open Source option, or suggest they find budget to pay a dev shop to build a bespoke system. Now you can sit with them, map the workflow, and use AI tools to generate a working prototype in a few sessions.
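To give a feel for what such a prototype might contain, here’s a minimal sketch. The Task fields and the seven-day “bottleneck” threshold are assumptions for illustration, not anything a real client has specified.

```python
# A minimal sketch of an AI-generated prototype: track task deadlines,
# flag bottlenecks, and collect alert messages. All names and the
# seven-day rule are illustrative assumptions.

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Task:
    name: str
    owner: str
    due: date
    done: bool = False

def find_bottlenecks(tasks: list[Task], today: date, stale_days: int = 7) -> list[str]:
    """Return alert messages for unfinished tasks overdue by more than stale_days."""
    alerts = []
    for task in tasks:
        if not task.done and (today - task.due) > timedelta(days=stale_days):
            overdue = (today - task.due).days
            alerts.append(f"'{task.name}' ({task.owner}) is {overdue} days overdue")
    return alerts

tasks = [
    Task("Draft funding bid", "Aisha", date(2025, 11, 1)),
    Task("Publish impact report", "Tom", date(2025, 11, 20), done=True),
]

for alert in find_bottlenecks(tasks, today=date(2025, 12, 1)):
    print(alert)  # a fuller prototype might email or message the team here
```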

This lets you offer something closer to what they actually need, shaped by your shared conversations rather than whatever a vendor decided.

But it also means you are taking on some of the responsibilities that would previously sit with a technical team. You need to think about things like maintenance: is the system understandable to others in your co‑op or agency, or is it bound to your context (and prompts)? It also needs to be tested under realistic conditions, rather than just trusting that if it runs once it will behave well in every scenario.
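For what it’s worth, “tested under realistic conditions” doesn’t have to mean anything elaborate. Here’s a sketch of the kind of edge cases worth covering, written as pytest tests; the rule being tested and the dates are illustrative assumptions.

```python
# Sketch of edge-case tests for a bottleneck rule like the one above.
# Run with: pytest this_file.py

from datetime import date, timedelta

def is_bottleneck(due: date, done: bool, today: date, stale_days: int = 7) -> bool:
    """A task counts as a bottleneck if it is unfinished and overdue by more than stale_days."""
    return not done and (today - due) > timedelta(days=stale_days)

def test_task_due_today_is_not_flagged():
    assert not is_bottleneck(due=date(2025, 11, 15), done=False, today=date(2025, 11, 15))

def test_completed_overdue_task_is_not_flagged():
    assert not is_bottleneck(due=date(2025, 10, 1), done=True, today=date(2025, 11, 15))

def test_long_overdue_open_task_is_flagged():
    assert is_bottleneck(due=date(2025, 10, 1), done=False, today=date(2025, 11, 15))
```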

There’s a structural question here too: are you building on a proprietary platform that locks the client in, or on more open foundations that they can move away from later? Both choices come with trade-offs around cost, convenience, and resilience. The point I’m making is that you are now part of that decision.

Educators

 A woman in a green dress holding a pen in front of a projector screen
Photo by Marina Nazina / Unsplash

Now let’s imagine that you teach young people or adults. You’ve begun to use AI to help prepare lessons, grade assignments, generate discussion prompts, provide feedback on student work, and sometimes to help students understand difficult concepts. This is already happening, with many teachers reporting that they use AI tools for at least some of these tasks.

I’ve been out of the classroom now for over 15 years, but I can imagine the empowerment would be real. You can generate multiple explanations of a tricky topic, create more varied practice questions, and personalise feedback at scale. You would spend less time on admin and have more time for the parts of teaching that matter most: the feedback loop of (i) understanding what your students actually know, (ii) noticing where they’re stuck, and (iii) designing experiences that help them move forward.

But, just like in the other examples, this creates new responsibilities that are easy to overlook.

When you use AI to generate lesson materials, you obviously need to check them. Does the explanation match what you actually want to teach? Are there subtle errors? Does it reflect the diversity of learners in your classroom? You cannot just paste AI output into your course and assume it works for everyone.

And when you use AI to grade or assess work, you need to think about fairness. AI systems are known to embed biases around language, what “good” work looks like, and even which students get the benefit of the doubt. If you are using an AI tool to evaluate student work, you need to understand what it is looking for and check its judgements against your own. Is it treating all students fairly? Are there patterns in what it marks down (or up)?

If and when you use AI to provide feedback, you are using it as a proxy for your own judgement. Students tend to take feedback seriously because it comes from you as a teacher – except, in this case, it comes from an algorithm. If AI feedback is wrong or misleading, you can’t blame the model; you have to own that.

In practice, that means reviewing feedback before it reaches students, or being explicit that it is AI-generated and should be treated with that in mind.

There’s also the question of what students are learning about knowledge and labour when you use AI heavily in your teaching. If they come to see AI as the source of “answers”, they may miss the value of struggling with a subject and building understanding over time.

If they think essays are written by just prompting an AI and copy/pasting an output (I worry about this with my own kids), they may not develop the thinking that comes from writing. Your role includes helping them understand when AI is a useful tool and when it is a shortcut that skips something important.

As in the other examples, there’s a vendor dependency question here, too. If you teach students using proprietary AI tools, you are teaching them to rely on those vendors. My family uses Perplexity quite a lot, and in fact I use the Pro version. If I was teaching, I couldn’t very well recommend a paid-for AI tool, so I’d be recommending open tools, or – even better – how to think critically about AI outputs regardless of the system. It’s a fine balance.

We know that teachers are overworked. But there’s an important responsibility here in shaping how a generation thinks about knowledge work and truth. When AI can generate pretty much anything, teaching students to verify, question, and think critically becomes more important than ever. So yes, AI can make your job faster and easier, but there’s a trade-off that needs not just acknowledging, but addressing.

Some practical questions

A question mark painted on concrete
Photo by Stephen Harlan / Unsplash

These three examples show that the agency granted by AI generates a new kind of responsibility. You are no longer just a “user” of systems others designed but, to varying degrees, a designer and implementer of systems.

Here are some practical questions worth reflecting on, no matter what your current role is.

  1. Can you state clearly what the system needs to do? Vague instructions produce vague or brittle results, whether the builder is human or AI.
  2. Can you test whether the output works? You don’t need to be able to write code to run scenarios, try edge cases, and check whether results make sense.
  3. Can you judge basic risks? Who will have access to the data? How sensitive is it? What are the consequences if the system fails?
  4. Do you know what the tools you’ve chosen do with your data? Have you read enough of the terms or documentation to know if prompts are logged, stored, or used for training?
  5. Can you document what you built so that someone else can maintain it? Notes, diagrams, and a simple “how this works” page go a long way.

There are many other questions here, too. I’m not sharing these to suggest that you should avoid using new AI tools, but rather to help you take a step back and see the difference between a short‑lived experiment and a system your organisation might be able to trust.

Final words

So where do we go from here? If you lead an organisation, you need to know which decisions (if any) you’re happy for AI-derived software to influence. If you work in a co-op, consultancy, or agency, you need to have honest conversations about maintenance and dependence with your clients. And if you teach, you need to think about what digital literacies you’re developing with your students when AI can produce an answer to any standard piece of work you set them.

Our future is shaped by people making decisions in roles like yours. These choices are hard work! They require thinking about governance, about systems, and about what happens over the medium and long term.

Code is becoming cheap. Care, it turns out, is not.

