AI, Instructional Design

Two Questions That Made Me Reflect on Instructional Design, AI, and Speed

In the quiet space before decisions, good design takes shape. Image generated by Envato ImageGen.

Question 1:

“Imagine this scenario, Yin: there’s no existing instructional design (ID) team. But you and a few others who would be hired shortly need to convert 250 15-week graduate courses into 7-week courses. You have about three to four months to do so. How would you use AI apps such as Make to accelerate the work?”

This question is intriguing, but it also reveals several underlying assumptions:

  • First, compressing a long course into a shorter one is not a content reduction exercise. It is a learning redesign challenge.

  • Second, the solution has been decided prior to an instructional designer’s performance gap analysis. The assumption is that the ID’s role here is to execute the solution.

  • Third, AI can accelerate parts of the work, but only so much. When AI is expected to accelerate everything, the result is not faster progress. In fact, it is misalignment at scale.

  • Fourth, the organizational environment is not set up for this timeline. There is no existing ID team, yet a new one is expected to deliver at scale almost immediately.

This scenario reveals a misunderstanding of what instructional design is and can do. We systematically identify and solve performance problems; training and courses are just two of many possible solutions. In instructional design, how we define the work often matters more than how quickly we try to complete it.

The redesign this scenario entails is a systematic evaluation of cognitive load, sequencing, practice opportunities, spacing, retention, and assessment structure. AI may help with drafting a new syllabus, creating discussion prompts, and accelerating media production, but it cannot make design judgments, define learning strategy, or understand context deeply. If AI is used at all, the bottleneck in this project will be the review pipeline, assuming leadership is serious about keeping a human in the loop to check AI outputs. Even with AI assistance, this pace is unrealistic for quality work.

The correct sequence for this scenario is: define the performance or instructional problem, define quality standards, design the workflow, then select tools. Don’t start with the tool.

Speed is valuable, but only when we start right and move in the right direction. You can remove content quickly, but you cannot remove complexity (of all kinds: learning, change management, and more) without consequences. AI is not a substitute for design capacity; it amplifies the impact of existing decisions.

This leads to a second question: Where should instructional design sit in an organization? 

Many of the tensions in this scenario come down to where instructional design is positioned.

If ID is there only to execute a solution, it has been positioned downstream, which is too late in the workflow.

If it is allowed to identify and shape the problem itself, it has rightly been positioned upstream, where it can make a bigger impact.

Let me sum up by putting it all together:

  • ID is not just execution; it defines the problem
  • Compression ≠ content reduction
  • AI accelerates outputs, not judgment
  • Strategy without design thinking creates misalignment
  • You cannot scale what you don’t yet understand
  • Speed without clarity amplifies problems
  • Where ID sits determines its impact

My ID prof used to say, “Organizations always want it fast, good, and cheap.” In instructional design, speed does matter, but only after we’re clear on what we’re trying to solve.

2 Comments

  • TB

    Yin, I started my ID career doing corporate design on the Individual and Organizational Effectiveness team at a higher education management company, then at a global professional services firm, moved into higher education course design for a stint, and returned to corporate ID and faculty roles. That arc shapes how I see this conversation.

    I’ve long believed that what ID is called in academia for academic courses should carry a different name from what we do on the corporate side. It’s also why I look at IDs who have come solely from academic contexts through a different lens, not as a judgment of individuals, but as a recognition that the structural conditions of academic ID work shape practice in ways that have little to do with training or competence. Faculty are the actual course producers. IDs often come in downstream. The timeline rarely allows for the kind of systematic process you’re describing, and that’s a feature of the environment.

    This is also why I sometimes use the term instructional systems development or instructional systems design to distinguish more structured, rigorous approaches from what often gets called ID in practice. If you have studied ISD at that level, often at the graduate level for those who pursued it, I can see why this scenario would be frustrating. You’ll rarely get to fully use that part of your training, even when it’s exactly what the work demands.

    Your critique is sound and the principles you’ve referenced are ones the field broadly accepts. Where I’d add a layer is that the scenario you’re describing isn’t unusual. It is, in fact, normal in many corporate and large-scale environments. Practitioners in those contexts have developed different workflows because the ADDIE-style ideal rarely survives contact with organizational timelines, governance structures, and the reality that the solution is often decided before the ID is ever in the room.

    I work in a constrained, regulated environment. That’s the lens I return to consistently in my writing and practice. What does good design look like when the ideal conditions don’t exist? When the timeline is compressed, when you’re downstream, when leadership has already committed to a direction? The answer doesn’t involve abandoning rigor, but includes applying it strategically within the constraints you actually have, not the ones you wish you had.

    Your post makes the right argument for an ideal setting. What I’d push on is how we translate those principles into constrained ones given that’s where most practitioners are actually working.

  • Yin Wah Kreher

    Treca, this is such a thoughtful take. I really appreciate the lens you bring, especially around constrained environments and downstream roles. That’s very real!

    I actually don’t disagree with you that what I described happens all the time :). In fact, that’s partly what prompted my reflection. What I’m highlighting isn’t the existence of those constraints, but how quickly they become normalized as “that’s just how it works.”

    When we operate as the small “d” (vs. the big “D” of systems design), it’s common to work within a system that structurally limits the impact of the work, and fairly common to accept a solution that has already been decided. In what I refer to as HPT or HPI (human performance technology or human performance improvement) work, the big D would typically get to do the front-end analysis (FEA) work. What happens, then, to small “d”s in constrained environments, such as in this scenario?

    I see the role a bit differently; perhaps the small d can not only apply rigor within constraints, but also make those constraints visible when they undermine outcomes. Even small shifts upstream, such as graciously and constructively questioning assumptions, reframing scope, or using AI to accelerate analysis rather than just production, can change what’s possible.

    For example, in this scenario, the small d could explore what is driving the ambitious and high-risk conversion, and suggest converting these hundreds of courses in phases instead of all at once.

    So for me, it’s less about ideal vs. real, and more about learning to offer input judiciously. The feedback may be accepted or not, but if the ID has the insight, why not share it when it’s appropriate? Of course, they need to read the room carefully. In some cases, it’s just not the environment where such input/feedback is accepted.

    So the real ID work becomes this: where do we accept constraints, and where do we push back on them graciously, given our training and experience?

    That tension is hard, but it’s also where the work becomes meaningful. It’s how highly trained IDs can contribute their expertise in ways that still move the needle, even within constraints.

    Thanks for the insight!
