AI adoption and the reshaping of early-career capability – a shared challenge for higher education and industry

Last week I delivered the opening keynote at the Graduate Employability: Insights to Impact Forum, hosted by Swinburne, on how generative and agentic AI are reshaping early-career work and how industry, universities and students can better prepare for it. I’ve had a few days to reflect, and I’m ready to share some thoughts.

The dominant media-driven narratives remain polarised. Either AI will eliminate graduate jobs (“the graduate jobpocalypse!”, “a robot stole my internship!”), or the AI bubble will burst, forcing a dramatic correction, a decline in organisational use, and the re-hiring of all the graduates we replaced with robots.

Clear-eyed analysis of evidence emerging from my employer survey research in Australia, international workforce analyses out of Stanford and Harvard, and recent graduate outcomes data suggests a far more complex picture — one that requires a nuanced and collaborative response.

In the United States, entry-level roles in many professional areas are in decline, partly attributable to AI adoption. In Australia, we can see the beginning of possible declines in graduate opportunities in bellwether disciplines like IT and law. However, my research with Australian employers suggests AI adoption is uneven. Its impact varies by sector, regulatory context, organisational size, leadership preference and task profile. Across these differences, however, some structural shifts are becoming clearer.

Routine, programmable tasks are declining. Expectations around judgement, oversight, integration and complex human interaction are rising — often far earlier in career trajectories than before. Entry-level roles are becoming broader, more cognitively demanding and less scaffolded. For instance, entry-level supply chain and logistics professionals are being asked to problem-solve anomalies and exceptions rather than beginning their careers with foundational data updating, reporting and compliance tracking.

Entry-level workforce shifts are happening, but I’m not seeing a jobpocalypse.

Many of us will have heard stories of organisations keen to take advantage of perceived AI efficiencies, moving quickly to adopt AI and reducing graduate hiring. Now, with greater AI maturity and clearer insights into organisational capability, public reporting suggests IBM and others are making more room for entry-level roles again.

For decades, many organisations have relied on pyramid-shaped workforce models, where early-career roles were places to perform straightforward tasks and also provided protected space in which tacit knowledge and professional judgement could be formed. When AI is introduced primarily as a tool for automation, without redesigning those developmental pathways, capability formation becomes compressed. We begin to see the emergence of more “diamond-shaped” structures, with fewer entry points and heavier reliance on mid-career expertise.

This approach may improve short-term efficiency (though, as I’ll share below, mounting evidence suggests otherwise). It certainly does not support long-term capability sustainability, and many organisations are now recognising this.

A very recent workflow study from Stanford examined software development workflows and tasks performed by AI and people. It found that full automation — for example, “write the code for this app” or “write the analytical report for Y” — often introduces significant inefficiencies through human checking and rework at the end. By contrast, where humans retain oversight and use AI to augment defined components of work within a workflow (staying in control and delegating programmable sub-steps such as data cleaning or boilerplate code), productivity gains are much stronger.

The study found that the automation approach resulted in an 18% productivity decline, while the augmentation approach resulted in a 24% productivity increase.

For educators, these findings shift the conversation beyond AI literacy.

Graduates do need technical capability and AI literacy (“how to get AI to do what I need and ensure the output is high quality”). More critically, however, they require AI fluency: the capacity to use AI for augmentation — knowing when to use it and when not to — exercising domain judgement, interrogating outputs, understanding limitations, integrating ethical reasoning, and achieving the best overall outcome possible.

As one industry panellist at the forum observed:

“AI isn’t replacing human judgement. It’s making the absence of judgement more visible.”

The question for universities is how we integrate these fluencies into our courses, even as industry is concurrently working out how best to augment rather than automate with AI — and as AI capability itself continues to advance rapidly. This is a strong signal that we need to move away from legacy, slow-moving “curriculum in boxes” towards more advanced forms of authentic learning and teaching. Further, we need to go beyond episodic industry engagement to deep, reciprocally beneficial partnerships, collaborating to redesign the way professional capability is developed and talent pipelines are formed.

We can’t get away with tinkering at the edges of curriculum. This is a deep design challenge to which higher education and industry need to commit.
