Sustaining Direction Dynamically: Designing governance tempo, portfolio and insight management for future and change capability in higher education

This is the sixth article in a series on future and change capability in higher education, and the second of two posts on strategy, governance and risk. This post is a long one, because it’s where I get into how institutions can become more agile by making good decisions: getting the right people with the right authority to move at the right speed with the right information.

The premise of the future and change capable series is that we are in an environment of sustained uncertainty – AI disruption, funding volatility, shifting student demand, policy instability and declining popularity – and universities need more than capable leaders and good intentions to navigate it all effectively. Along with external factors like enabling policy environments, the institutions themselves need future and change capability. This is the designed capability to decide deliberately in the face of change, act coherently and learn from what they do.

The previous post added a third element to the framework developed in this series: decision architecture, which is the structural layer through which strategic commitments are formed and risk is treated as a strategic judgement rather than a compliance activity. That argument built on two foundations developed earlier in this series. Identity is the anchor that tells an institution what matters; permeability is the deliberate design of channels through which insight enters and circulates.

If you need a recap, all of the posts are here in order:

post 1: why future and change capability in higher education?

post 2: identity before adaptability

post 3: university permeability and adaptive ecosystems

post 4: midway point reflection and why is Ruth exploring all this?

post 5: decision architecture

And post 6 is this one – governing decisions and commitments dynamically over time.

Before diving in, I want to flag that what I am developing in this post (and elsewhere in this series) is grounded in organisational learning, strategy and governance theory and research, and in my own experience as an institutional leader. But systematic empirical evidence of how these governance practices could and do function in universities specifically is limited, and universities have structures, policy contexts and aims that are very distinct from both the public and corporate sectors. This blog series is building towards empirical research that will refine the ideas I’m proposing and test them against others’ experiences and practices. At this stage I offer these ideas as reasoned propositions for testing, not established findings.

You’ll be happy to know that the core argument in this post isn’t that universities need more bureaucracy and governance machinery. I think most already have too much process, applied in too generic a manner. The risk management and quality assurance intentions are good, but the execution isn’t always optimal. The problem is that the wrong kind of deliberation can be applied to decisions, and that important decisions are sometimes not made consciously at all.

Poorly calibrated governance, in which decisions of very different kinds move at roughly the same pace through similar structures regardless of what is at stake, is itself a source of institutional inertia. What future and change capability requires is purposefully calibrated governance, with less undifferentiated or unconscious deliberation and more deliberation of the right kind, surfacing and dealing with each decision at hand in a way that balances consideration with speed. This post discusses three connected areas that shape whether this is achievable: how institutions differentiate the tempo of different decisions; how they manage and conclude strategic portfolios; and how insights generated through permeability, including evaluation insights, reach the fora with authority and capability to act on them.

Governing decision speed

Future and change capability requires governance calibrated to what each decision actually needs. Universities can move quickly when circumstances require it, but speed without adequate deliberation, the right information or the right people in the room does not always produce good outcomes. Often, when decisions are made rapidly or reactively, there isn’t time to assemble the ingredients good decisions require: the right people with the right authority, the right data and analysis, and clear pathways for enactment. Deliberate pre-design of governance pathways for different kinds of decisions allows institutions to respond well, rather than having to improvise or default to an existing pathway that doesn’t fit. We can do this, and have done it well before: many universities set up differentiated decision-speed governance during COVID, and it worked well. Once the crisis was over, we mostly returned to the single-speed governance status quo.

Universities commonly apply similar governance processes to decisions that differ considerably in what they require. A proposal needing swift operational action may sit in the same committee queue as a long-term strategic commitment requiring deep deliberation. The speed at which each moves is shaped as much by funding cycles, political momentum and institutional sponsorship as by the nature of the decision itself. Necessarily, governance processes tend to be structured around annual calendars, committee schedules and funding timelines. Unfortunately these rarely align neatly with each other (how many of us have committed to an operational plan without the associated annual budget approved?) let alone with what individual decisions actually require. The result is a system that is simultaneously too slow and deep for some decisions, and too fast and shallow for others.

The organisational ambidexterity literature has long recognised that institutions that simultaneously pursue the exploration of new initiatives and the consolidation of established activities need different conditions for each. One way to approach this is to create distinct organisational units for these different purposes. Another is to cycle through periods of exploration and periods of embedding and consolidation. Neither translates straightforwardly to the university context. Unlike commercial organisations that can more cleanly separate exploratory from consolidating activities, universities are deeply integrated: different types of decisions and activities draw on the same academic and professional staff, the same infrastructure, and the same resourcing, and are governed by the same bodies.

A major review of UK higher education governance, conducted by Advance HE in partnership with the Committee of University Chairs and the Association of Heads of University Administration (2025), noted the tension between the pace that effective transformation requires and proper oversight obligations. Drawing on wide-ranging engagement with governors, chairs, institutional leaders and board secretaries, the review called for more deliberate differentiation between what warrants extended deliberation at board level and what can be resolved more quickly with appropriate authority. When everything receives the same level of scrutiny regardless of its strategic significance, the review recognised, governing bodies spend less time on the decisions that most warrant their attention.

Temporal architecture can address this misalignment through the deliberate design of different decision pathways, with different routes, authority structures and tempos for decisions that differ in their stakes, reversibility and legitimacy requirements. Mission-level and long-horizon commitments – about research concentration, pedagogic direction or the institution’s relationship with particular communities – need deep deliberation, broad consultation and extended timeframes proportionate to what is being decided. Portfolio adjustments need evidence-based, time-bounded consultation with clear decision rights and authority to conclude. Bounded pilots and experiments need delegated authority and built-in review triggers, with explicit assumptions and predetermined signals for continuation or adaptation. Designing these different pathways reduces overall deliberative burden: decisions that warrant speed can move more quickly, and those that warrant depth can receive it.

Two governance challenges are likely to arise in any attempt to differentiate decision tempos. The first concerns escalation. When a bounded experiment succeeds and warrants serious institutional investment, scaling typically requires a different kind of decision from the one that approved the original pilot: different authority, a different funding stream and often an institutional case that the pilot phase was never required to build. Without designed escalation pathways, successful pilots can stall at this transition. Temporal differentiation must therefore address how decisions move between modes, not just how each mode is structured.

The second concerns classification. Whether something is treated as a mission-level commitment or a bounded experiment is not a neutral determination in a university. It involves questions about where legitimate authority lies and who is responsible for making the classification when that is unclear. In practice, establishing this requires deliberate discussion at executive level about which governance pathway applies to which kinds of decisions, and who holds responsibility for making that determination, ideally before, rather than during, the decision process itself. And the more bounded experiments a well-designed fast lane generates, the more important it becomes to manage what happens to them over time.

The capability to stop

Without deliberate management, a portfolio of initiatives can develop inefficiencies and redundancies, become misaligned with current mission and priorities, and generate sub-optimal outcomes. Individual initiatives may have been examined at launch or reviewed in isolation, but systematic review of the portfolio as a whole is often absent. Future and change capability requires the ability to adjust as circumstances change, and that adjustment comes through deliberate portfolio management.

Portfolio discipline is the active, ongoing management of an institution’s portfolio of initiatives and activities. It involves the deliberate decisions about what to continue, what to adapt, what to scale and what to pause or stop, made against explicit criteria rather than by default or inertia. It is the counterpart to the capability to start things, and in many universities it is considerably less developed.

Temporal differentiation, if it functions well, generates more bounded experiments, proofs of concept, prototypes, and pilots. Without portfolio discipline to complement it, it also generates more unexamined legacy. Each pilot, initiative and project that moves through the fast lane creates a potential new commitment. Without designed mechanisms for rapid evaluation and conclusion, the portfolio can grow through addition rather than managed choice. Alternatively, it can leave a graveyard of promising initiatives that were never properly examined, or that showed real potential but never found the resourcing or decision authority to advance further.

Portfolio accretion is a pervasive feature of university life. Each initiative made sense when it was launched. Collectively, accumulated initiatives create administrative complexity and progressively narrow the capacity for adaptation. The resourcing challenge this creates is not usually a straightforward competition between legacy and new. In my experience, legacy initiatives are typically funded through operational budgets and driven by operational staff and operational KPIs; new initiatives may be associated with strategic/project KPIs and different funding sources (but not always). Because the two streams may not come into direct competition, the tension between them tends to stay below the surface. Staff sustain ongoing commitments while also driving new priorities, and the organisational area may attempt to absorb both simultaneously rather than evaluating what might slow, pause or stop. When something does eventually conclude, it can be the initiative with the least visibility or advocacy, regardless of its actual mission alignment or return.

Empirical research illuminates how universities actually make decisions about program closure, and how these conditions play out in practice. Eckel’s well-known empirical study of program discontinuation at four US research universities (2002) found that the determining factors were generally not performance evidence or mission alignment. Programs that were closed tended to be those with fewer institutional supporters and limited capacity to mount a defence during the review process, regardless of the criteria formally developed to guide it. Alex Usher’s 2025 commentary suggests that Eckel’s research is still relevant, noting that degree closure decisions continue to be shaped by a combination of implicit criteria such as prestige/reputation and sponsorship. My take is that implicit criteria may well have some validity, but we need the courage to make them explicit, to make mission and impact central, and to make unpopular decisions if needed. Institutions need to define exit criteria in advance and design the process for making stopping decisions. Without those conditions, stopping decisions are less likely to align with what truly matters to an institution, and may be deferred or simply take longer to make.

I’d argue that stopping is a design problem. It requires agreed and shared criteria established at the point of commitment that define what would warrant continuation, adaptation or conclusion; regular portfolio review against those criteria; and reallocation mechanisms that direct freed resource toward current priorities rather than baseline absorption. Dickeson’s program prioritisation model is the most widely used practitioner framework for this in higher education. Its limitation is that it centres on metric-driven ranking (enrolments, cost per student, financial contribution) rather than mission or strategy-anchored judgement, or less tangible criteria such as reputation or public good. A complete assessment also requires weighing the costs and benefits of continuing against alternatives, including what the same resources could do if differently directed.

The distinction Argyris and Schön draw between single-loop and double-loop learning is useful here. Stopping an initiative well means questioning whether its founding assumptions were correct, not just whether it met its targets. Without predefined criteria, institutions can only adjust at the operational level – modifying delivery, adjusting timelines, tweaking scope – without asking whether the initiative should exist in its current form. Criteria established at the outset make that question structurally available when review comes around, helping move it from the domain of assumption and political negotiation to the domain of considered judgement.

Time-limiting pilots and programs by default creates a structural trigger for deliberate review, ensuring that continuation is a considered decision rather than the default outcome of inertia. These review triggers need to be calibrated to what is being reviewed. Activities that require years to develop and show impact warrant longer cycles; bounded experiments warrant shorter ones. Responsibility for setting and conducting these reviews needs to be clearly assigned. In many institutions this sits most naturally at executive or Provost level, with academic governance consulted on quality dimensions but not holding sole authority over continuation.

The reallocation mechanism is as important as the stopping decision. Even when institutions agree that something should conclude, freed resource does not always reach new priorities. It can be absorbed into operational costs or directed toward deficit reduction rather than strategic reinvestment. Either of those may be the right call, but it needs to be a conscious decision, with a mechanism designed to carry it out.

Many pilots are conceived with eventual scaling in mind, but the infrastructure to make it possible is frequently less developed, with no designed pathway from trial to sustained investment. Anyone in higher education who has encountered either of the phrases ‘everything here is a project’ or ‘everything here is a pilot’ will know what I mean. Project staff are focused on producing deliverables within the funding period, not on building the case or the infrastructure for continuation beyond it. When project funding concludes, the resourcing to sustain or generalise what worked can be unavailable, and the governance mechanism to authorise and fund ongoing investment may be unclear or poorly aligned. The capability to scale requires not only criteria for what would justify moving from experiment to commitment, but decision authority capable of acting on that judgement, and planned resourcing pathways to do so.

For portfolio discipline to work, it should connect to identity throughout. Mission provides the principled basis for decisions about what to continue and what to relinquish, distinct from decisions driven by financial pressure or the advocacy of those arguing for particular initiatives at the time.

Insight pathways, feedback loops and institutional learning

Future and change capability depends on institutions learning from experience and adjusting over time, which requires insight from evaluation, data and practice to reach the people with authority to act on it. The third article in this series addressed how permeability enables insight to enter and circulate. This section addresses what happens to that insight once it exists. I think it’s fair to say that in practice, insight does not always flow naturally to the places where it can influence strategic direction.

Even where permeability has worked well and insight has been carefully gathered and analysed, the journey from insight to institutional decision can have more stumbling blocks than is often acknowledged or addressed. Evaluation may be designed primarily around compliance or outputs rather than outcomes and institutional learning. Evaluation design shapes what questions get asked and whose perspectives are sought. Insight may not reach the people with authority to act on it in a form, or at a time, that enables considered response. Or it may reach the right people through the wrong forum, such as a governance body with quality assurance responsibilities rather than one with authority over strategy and resourcing. None of this reflects a lack of good intent; it reflects the absence of up-front, deliberate design around how insight is created, routed, aggregated, translated and acted upon across institutional boundaries.

Weick’s sensemaking framework is useful here. According to Weick, organisations actively construct meaning from information, shaped by existing commitments, identities and contexts. Degn’s application of the framework to higher education strategy shows how leaders simultaneously make sense of changing circumstances and actively shape how insight is interpreted across the institution. Insight pathways are therefore interpretive forums as much as information channels. Who has the standing to frame what reaches decision authority is a political question as much as a design one. Whether uncomfortable intelligence surfaces or gets managed away depends not only on how pathways are structured, but on the trust and psychological safety conditions the next post in this series (culture and capability) examines.

The sensemaking and organisational learning literature suggests that effective insight pathways tend to share four features. First, decision-makers encounter evidence through structured collective interpretation in light of institutional purpose, not just dashboard reporting. Second, the connection between evidence and decision is traceable, strengthening accountability and institutional memory. Third, decision outcomes feed back into what data is subsequently collected and what questions future evaluations ask, so that learning from one cycle shapes the design of the next. Finally, different forms of intelligence (quantitative data, qualitative feedback, experimental findings and professional judgement) are considered together rather than routed to separate forums.

A further challenge in designing effective insight pathways concerns synthesis and timing. Useful intelligence often exists across different parts of an institution (in faculties, research offices, survey and data units, student services and planning teams), but in dispersed and incompatible forms. Bringing it together in a way that informs strategic deliberation requires effort that is rarely assigned as an explicit institutional responsibility. In fast-moving environments the timing challenge compounds this: by the time insight has been gathered, synthesised and brought to deliberation, it may already be outdated, a particular risk in areas like digital and AI disruption, international student market shifts or rapid policy change. Feedback cycles therefore need to be calibrated to the tempo of the decisions they serve and the intelligence they provide, which connects directly to the temporal architecture argument developed earlier in this post.

Most institutional feedback mechanisms are single-loop, reporting whether initiatives met their targets and prompting operational adjustment within existing assumptions. Double-loop feedback reaches strategic deliberation and asks whether the targets were appropriate, whether the founding assumptions held, whether the direction warrants revision. What we know from learning analytics research, and from examples like Arizona State University’s integration of student data with curriculum redesign and support, is that structured feedback between data and institutional practice can improve outcomes at the operational level. Whether universities have built equivalent feedback loops at the level of strategic governance is much less clear, and is part of what the empirical work this series is building toward will need to examine.

Sustaining direction

Temporal architecture, portfolio discipline and insight pathways with feedback loops are complementary and connected. The ‘fast’ track of decision-making depends on insight pathways to generate the evidence that informs continuation or adaptation; portfolio discipline depends on double-loop feedback to make principled recalibration possible. Temporal architecture shapes how quickly that feedback can reach deliberation. Each constrains and enables the others.

What this governance cycle makes possible, when it functions well, is what I described in the midpoint reflection in this series as institutional learning and recalibration. This is the institutional capability to connect experience, evidence and judgement over time, and it is, I’d argue, a large part of what future and change capability looks like in practice: a continuously renewed capacity to understand, adjust and act with purpose.

There is an elephant in the room that needs naming. The mechanisms described across this post (differentiated decision pathways, portfolio review cycles, evaluation frameworks, insight aggregation functions) do of course add their own governance overhead. Applied generically or without sufficient care, they risk generating exactly the administrivia they are designed to replace. My suggestion is that the same principle of calibration that applies to institutional decisions should apply to the governance mechanisms themselves – in the spirit of double-loop learning, a meta-level application of the framework’s own logic. The mechanisms must be proportionate to what is being reviewed, differentiated by stakes, periodically evaluated for whether they add value or merely add burden, and adjusted accordingly. Getting that balance right is not at all straightforward, and is part of what the empirical work this series is building toward will need to examine.

All of this depends on culture and capability, the organisational conditions that enable or undermine these structures in practice. None of these (whether dissent reaches insight pathways, whether stopping decisions can be made without prohibitive political cost, whether feedback revises strategic direction rather than confirming prior commitments) are questions that structural design can settle by itself. How institutions develop the culture and capability to use these structures well is where the series turns next.
