In the 1980s, I taught a course in systems analysis, and the classic approach was top-down: analysis, design, code, test, and install. Some refer to this as a waterfall approach. The general approach can be modified by defining modules that are assigned to individual programmers or groups, which adds unit and then integration testing before release and before any real feedback loop. I worked in one of those environments. Invariably, some (or many) analysis and design assumptions and predictions ended up being wrong. The further along that top-down process you are before you find problems, the bigger the impact: a project can end up canceled, with huge time, cost, and personnel consequences.
Is the goal then to spend more time on analysis, at the risk of "analysis paralysis"? I consulted for a very large organization that fell into that trap. In most new projects, it's difficult or impossible to know what you're doing until you do it. The alternative is a bottom-up methodology: install a program stub with base functions using whatever data/file format you throw together. The key benefit is that the understanding and customer feedback loop starts immediately. However, prototyping, hacking, or throwing something together still runs a big risk of large code redos, and management tends to think the project is further along than it really is. Cleaning up and bulletproofing a prototype take time, and many want to add features on the fly. "This won't take long, I promise!" Still, with a prototype approach, redo needs are usually discovered sooner and with fewer consequences.
While teaching the systems analysis course, I formalized and promoted something I called an "Outside-In" approach to systems development. You start with an initial top-down analysis and then define and construct general base components that can be used to build the system from the bottom up. A prototype (incremental release versions, actually) is then built iteratively with those base components. This provides a better framework than jumping in and hacking out anything without regard to long-term consequences or how it ties into a world of common data and processing. Bottom-up development with good components minimizes the magnitude of redos.
When Brad Winslow and I created our integrated "Nautilus" system in the 1980s, consisting of a growing number of calculation and CAD design tools, we started by creating a common user interface front end, a common calculation module format, and database file and access routines for any design data. On top of that we built a design executive system to manage and update the individual calculation modules. These components minimized change for an evolving system: the base components might change, but the structure of the system did not. Feedback came early, and large changes were minimized.
When people now use the term "agile", it's often unclear what they mean and whether it's mostly hype. Process can only change agility so much. ("The Agile movement is not anti-methodology...", and many formulations sure look more pedantic these days, with excess baggage.) Everyone wants fast and flexible code development, but what are the assumptions, and how is it better than prototyping with modules and fewer good programmers? Is it for a stand-alone system, or is there a need to integrate with other corporate or organizational systems? Is it for internal use, or is there an integration or tie-in need with other companies and markets, both horizontally and vertically? If you want to develop a Product Lifecycle Management (PLM) system, you need to interact with the outside world of CAD/CAM/CAE. Everything now hinges on multi-discipline cooperation (coopetition) over the web. In the old days, many businesses kept their tools and processes private, but that's all changing with the cost savings of outsourcing, open source software, and SaaS. If you have a monolithic and complex internal process (agile or not), then outsourcing or working with the rest of the world becomes difficult without clearly defined basic computing objects and interaction boundaries.
Agile can't be just about changing the management of a traditional systems development process. It also can't be just about redefining old prototype ideas. Top-down and bottom-up have existed for ages, and agility has to be more than just a process or organizational change. Unfortunately, many things can get lost when a good "Agile Manifesto" gets translated into reality. What's missing are independent and compatible base computer components that are meaningful to all in a cloud development world. These components are the key to providing true agility (flexibility, independence, smaller learning curves, and speed) to any systems development process.
At the infrastructure level, Docker and Kubernetes containerize and modularize systems that hide all data and methods, but what about common components at a lower level? We have OOP-based classes, but they live in the internal, language-dependent worlds of C++, Java, Python, and others. Internal OOP does not live in an external web-based world that has to deal with file formats and the exchange of data, and it does not address the need to split data structures from methods for subject matter experts and industry stakeholders.
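The split argued for here can be illustrated generically. In conventional OOP, data and methods live together inside one class in one program; in the external style, the data travels as a standalone, language-neutral file that any tool can read, and the method is a separate program operating on it. Below is a minimal Python sketch of that idea. The XML layout and the names (`beam_data`, `load_variables`, `max_moment`) are hypothetical illustrations, not the actual VXML format or any TSPA API:

```python
import xml.etree.ElementTree as ET

# Data lives in an external, language-neutral file -- shown here as an
# inline XML string. The element names are made up for illustration;
# this is NOT the actual VXML schema.
beam_data = """
<variables>
  <var name="length" units="m">4.0</var>
  <var name="load" units="N">1500.0</var>
</variables>
"""

def load_variables(xml_text):
    """Parse the external data file into a plain dict of name -> value."""
    root = ET.fromstring(xml_text)
    return {v.get("name"): float(v.text) for v in root.findall("var")}

def max_moment(vars_):
    """A separate 'code engine': the method is kept apart from the data.
    Midspan bending moment of a simply supported beam, M = P * L / 4."""
    return vars_["load"] * vars_["length"] / 4.0

vars_ = load_variables(beam_data)
print(max_moment(vars_))  # prints 1500.0
```

Because the data file stands alone, a subject matter expert can inspect or edit it without touching the method, and a different engine (in any language) can consume the same file.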
TSPA defines those missing external computer objects: Code Engines (CE), Variable data structures and a single XML file format (VXML), User Interface Frameworks (UIF), and Process Executive Systems (PEX). These external computer objects (XOOP or what I call "soup") can provide true software development agility in a web-based world of cooperating methods and data. TSPA objects can enhance any agile process (Scrum, Extreme Programming, Test Driven Development, and more), and provide a common web-based framework for all software developers, subject matter experts, users, and industries. It's a framework that scales from students to independent software developers to large corporations. One example of external development scaling is the Lego Mindstorms robot programming system that scales from student programming in fifth grade to high school, college, and then to professional tools. TSPA has shown this same effect using examples where high school and college students have added valuable tools to an existing professional TSPA Process Executive (PEX) system independently.
After 50+ years, software development has to evolve from a world where independent software developers and companies create and control everything to one where everyone works together, and where subject matter experts and industry stakeholders are more than simple users or customers. Now that the worlds of PLM, Digital Twin, IoT, CAD/CAM, and much more have to be integrated, internal and pedantic agile programming techniques that focus on process are not good enough. We need clearly defined web-based computing objects that are cross-platform and cross-language, and that can be used to build all complex systems. Programmers also want cross-company and cross-industry knowledge and skills that are not tied to any one internal software development process.
TSPA provides an open source framework of objects that has gone through three working revisions. These objects offer solutions that can't be created any other way. The future of programming is to "Split the App." It may not have seemed reasonable just a few years ago, but with the speed of computers, SSD storage, and the web, the split becomes invisible to the user. However, for all other stakeholders building complex world-connected web-based systems, the TSPA split defines a fundamentally new basis for both internal and external agile software development.