Stubs

What are Stubs?

Stubs are placeholders for the original data after it has been migrated to secondary storage. They replace archived files in the location selected by the user during the archive process. Because stubs are proprietary and static, a corrupted or deleted stub file leaves the moved data orphaned. This creates a risk of disruption to users, applications, and data protection workflows.

Stubs are commonly used to make tiered data appear as if it still resides on primary storage. However, this approach introduces architectural limitations and operational risks, particularly at scale.

How do stubs work after data is migrated to secondary storage?

Stubs act as placeholders for the original files once data has been archived or moved to secondary storage. When users access stubbed data, the storage management system intercepts the request, retrieves the data from its secondary location (file, object, cloud, or tape), and rehydrates it back to primary storage. While this makes tiered data appear to reside on primary storage, the transparency ends there. The retrieval and rehydration process adds latency and increases the risk of data loss or corruption.

What are the risks and challenges associated with using proprietary stubs?

Stubs are brittle. When stubbed data is moved from one storage location to another, the stubs can break. Once broken, the storage management system no longer knows where the data resides, and the data becomes orphaned and inaccessible. Because stubs are proprietary and static, corruption or deletion of the stub file directly blocks access to the migrated data.

Most storage management solutions that use stubs rely on a client-server architecture and do not scale effectively to support data at massive scale. This creates operational challenges in large environments where data movement and growth are continuous.

Why do proprietary stub-based approaches limit transparency and scalability?

Proprietary interfaces such as stubs are used to make tiered data appear as though it remains on primary storage, but true transparency is not achieved. The storage management system must intercept access requests and rehydrate data, which introduces additional processing, latency, and operational complexity.

Because stubs are not based on open standards, they create dependency on a specific system. This limits flexibility and increases the risk of disruption if the stub file is altered, deleted or if data is moved outside the control of the storage management platform.

How does standards-based transparent data tiering differ from stub-based methods?

A true transparent data tiering solution creates no disruption, and that is only achievable with a standards-based approach. Komprise Intelligent Data Management uses Transparent Move Technology™ (TMT), which relies on Dynamic Links based on industry-standard symbolic links instead of proprietary stubs.

By using standards-based symbolic links rather than static proprietary stubs, the risk of orphaned data and broken access paths is eliminated. This approach maintains transparency without introducing latency, disruption, or the brittleness associated with traditional stub-based systems.
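To show why a standards-based link avoids the interception layer entirely, here is a minimal sketch using a plain POSIX symbolic link. This illustrates the industry-standard mechanism the article refers to, not Komprise's TMT implementation itself; the paths are hypothetical.

```python
import os
import tempfile
from pathlib import Path

# Hypothetical secondary (tiered) and primary storage locations.
tier = Path(tempfile.mkdtemp(prefix="secondary_"))
primary = Path(tempfile.mkdtemp(prefix="primary_"))

# The archived file lives on secondary storage.
archived = tier / "report.txt"
archived.write_text("quarterly numbers")

# A standards-based symbolic link stands in on primary storage.
link = primary / "report.txt"
os.symlink(archived, link)

# Any application opens the link exactly like the original file;
# the operating system follows it natively, with no proprietary
# interception or rehydration step in the data path.
print(link.read_text())  # prints: quarterly numbers
```

Because the link is an open filesystem standard rather than a private format, any tool that understands the filesystem can follow it, which is what removes the single-vendor dependency described above.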

Stubs are proprietary placeholders used after data is migrated to secondary storage, but they introduce risks such as orphaned data, latency, and limited scalability. Because stub-based systems rely on static and brittle mechanisms, they can disrupt users and applications when data is moved or modified. Standards-based transparent data tiering using symbolic links provides a more resilient and scalable alternative without the operational risks of proprietary stubs.

Learn more about the differences between stubs, symbolic links and Dynamic Links from Komprise.

Read the Komprise Architecture Overview white paper to learn more.
