Data Lake vs. Data Swamp

In the data storage arena, data lakes and data swamps are not new names. Both follow the same approach to storing data: combining structured and unstructured data in one repository. Large enterprises, which handle enormous amounts of data, use both types of data storage, the data lake and the data swamp. Because data lakes and data swamps are more scalable than structured storage, they cost less. And because they can hold unstructured data, you can add data to the repository without imposing a particular format. But there is a basic difference between a data lake and a data swamp.

The difference lies in how data is curated. What does this mean? For starters, a Data Lake is a repository where a vast amount of data of various types and structures can be ingested, stored, accessed, and analyzed. Specifically, Data Lakes make it easy for Data Scientists to mine and analyze data, require minimal transformation if any, facilitate automated pattern identification, and serve as a good online archive.

A Data Swamp, in contrast, has little or no organization or system. Data Swamps lack curation: little to no active management throughout the data life cycle, and little to no contextual metadata or Data Governance. As a result, Data Swamps tend to be frustrating and of little use, if usable at all.

Related post – Why Data lake is a viable solution in Big data space?

Why is a Data Lake important?

Flexibility and faster analysis

As data volume has grown rapidly, data complexity has increased at the same rate. Analyzing such vast amounts of data within a given timeframe is a rising challenge for companies’ operational efficiency. Flexibility and scalability are therefore the two essential parameters here, and analyzing data in its native format supports both. Though the analysis may be less precise than analysis performed on data in a standardized format, it preserves timeliness and user satisfaction, essential factors for any business.
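Analyzing data in place like this is often called “schema-on-read.” Here is a minimal sketch in Python, assuming a hypothetical newline-delimited JSON event file with a timestamp field; structure is imposed only at query time, so the raw data needs no up-front transformation.

```python
import pandas as pd

# Schema-on-read: the raw events stay in their native JSON format,
# and a schema is applied only at analysis time, not at ingestion.
# "events.json" is a hypothetical newline-delimited JSON file.
events = pd.read_json("events.json", lines=True)

# Impose just enough structure for this one analysis.
daily_counts = (
    events.assign(day=pd.to_datetime(events["timestamp"]).dt.date)
          .groupby("day")
          .size()
)
print(daily_counts)
```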

Supports different forms of data analytics

A wide variety of data analytics can be performed on a Data Lake, including:

  • Dashboards
  • Real-time analytics
  • Visualizations
  • Machine learning
  • Big data processing

Helps to explore diverse data types

Because a Data Lake supports all data types in their native format, companies can leverage the data’s potential even when it arrives in a new format. A Data Lake also lets legacy systems offload diverse data types.
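As a rough illustration, it is the reader, not the lake, that picks the right parser for each native format at analysis time. The file paths below are hypothetical.

```python
import pandas as pd

# Each source sits in the lake in its native format; the consumer
# chooses the appropriate parser at analysis time.
clickstream = pd.read_json("lake/raw/clickstream.json", lines=True)  # web events
orders = pd.read_csv("lake/raw/legacy_orders.csv")                   # legacy export
sensors = pd.read_parquet("lake/raw/sensors.parquet")                # IoT telemetry
```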

Increased ROI

Enhanced analytical ability gives companies better business agility. Better analytics translate to substantial ROI, bringing increased growth along with profit.

360-degree analysis

By implementing a Data Lake, companies can get a 360-degree view of their customers instead of a siloed one. This makes analysis more robust.

What does a data swamp do?

Data swamps are like data lakes, and they begin as lakes. Enterprises don’t usually start with a data swamp; swamps aren’t sold as-a-service or marketed. A data lake turns into a data swamp when the business sets no expectations or guidelines for its data storage. Swamps make analysis and retrieval very challenging.

Data swamps are a catch-all for data. When an organization needs to store data but doesn’t know how to categorize it, or doesn’t need it in a warehouse, the data lands in a data lake-turned-swamp, which collects unrelated objects and files. Data swamps accumulate unnecessary and outdated objects because users toss anything in, with no guidelines for relevance or timeliness.

Moreover, administrators don’t manage or govern data swamps regularly. No controls or categorization are placed on their stored objects. That’s part of the reason swamps don’t lend themselves to big data analytics. The other reason is their lack of metadata: objects and files stored in swamps frequently have no metadata at all, which makes them incredibly challenging to search or organize.
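To make the metadata point concrete, here is a minimal sketch of attaching contextual metadata to an object at write time, using boto3 against S3; the bucket name, key, and metadata fields are all hypothetical. Objects written without such a Metadata block are what swamp content typically looks like.

```python
import boto3

s3 = boto3.client("s3")

# A lake object carries contextual metadata; a swamp object does not.
# Bucket, key, and metadata values here are hypothetical examples.
with open("transactions.csv", "rb") as body:
    s3.put_object(
        Bucket="acme-data-lake",
        Key="raw/sales/2023/q1/transactions.csv",
        Body=body,
        Metadata={
            "source-system": "pos-terminal",
            "owner": "sales-engineering",
            "ingested-at": "2023-04-01",
            "retention": "7y",
        },
    )
```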

Data swamps are also a danger to compliance. They obscure customer data, and if businesses can’t find data in the murky recesses of the swamp, they can be found non-compliant with regulatory standards that require data to be retrievable or deletable. Most regulations require businesses to keep strictly accurate records of data, including who has access to it, and data swamps make that difficult (or impossible).

Differences: Oversaturation vs. Organization

Oversaturation, simply having too much data, is a common problem for data swamps. Both Data Lakes and data swamps share the flexibility of NoSQL, and as departments and various users offload non-curated data into the Data Lake, a series of “standing pools” of data emerges. These pools grow murky with an unknown number of data types that can’t readily integrate with each other to produce insights. As the disconnection continues across a growing number of projects and tools, data gets lost, and the Data Lake becomes abandoned, a headache. Too much data without the appropriate organization characterizes the Data Swamp.

With data curation and Data Governance, many data swamps can be cleaned and organized into data sets, though not to the point of over-organization that results in bottlenecks.
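As an illustration, a curation pass might route loose files from a landing (“swamp”) area into dataset-partitioned folders. This is a minimal sketch; the paths are hypothetical, and the simple suffix-based routing rule stands in for a real governance policy.

```python
from pathlib import Path
import shutil

SWAMP = Path("lake/landing")     # hypothetical uncurated area
CURATED = Path("lake/curated")   # hypothetical curated area

# Stand-in routing rule: map file types to named data sets.
ROUTES = {".csv": "tabular", ".json": "events", ".parquet": "tabular"}

for f in SWAMP.iterdir():
    dataset = ROUTES.get(f.suffix)
    if dataset is None:
        continue  # unknown types stay behind for manual review
    dest = CURATED / dataset / f.name
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.move(str(f), str(dest))
```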

Prioritization through Data Governance

Knowing your priorities is the key to implementing an efficient structure for the Data Lake through Data Governance. By understanding what the organization is trying to do with the Data Lake, its focal points and desired results, a fuller implementation of the Data Lake works for its users.

Curating Contextual Metadata

Data Curation is about “contextual metadata.” To that end, data assets themselves never need to be centralized, stored, or accessed in a single repository such as a Data Lake. An alternative is a federated system of smaller units, each maintaining authority over its own repository. Here the Open Archival Information System (OAIS) is considered as a template.

The OAIS model chunks multiple data files and packages them logically and physically as a single entity. Creating metadata about these chunks or containers of files, such as a primary schema, allows for effective Metadata Management of a particular business’ data sets without getting bogged down in systematic details and individual data files that can bottleneck a Data Lake. Not every piece of data in these subsets may be known, but the contextual metadata about the set gives a Data Analyst enough information to know, in general, what is inside. While this means accepting some level of data saturation, where exploring outside the business context may be difficult, the Data Lake retains its scalability and flexibility advantages within the business context.
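As an illustration, here is a minimal sketch of such a package: a manifest that describes a container of related files with a primary schema and business context, rather than cataloging every record inside. The package name, schema label, and metadata fields are hypothetical.

```python
import hashlib
import json
from pathlib import Path

def build_package(files: list[Path], manifest_path: Path) -> None:
    """Write an OAIS-style manifest describing a container of files."""
    manifest = {
        "package": "sales-q1-2023",   # logical container name (hypothetical)
        "schema": "transactions-v2",  # the "primary schema" (hypothetical)
        "business_context": "retail point-of-sale exports",
        "files": [
            {
                "name": f.name,
                "bytes": f.stat().st_size,
                "sha256": hashlib.sha256(f.read_bytes()).hexdigest(),
            }
            for f in files
        ],
    }
    manifest_path.write_text(json.dumps(manifest, indent=2))
```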
