Characteristics of Big Data

Big data analytics is no longer a new term, and almost everyone knows something about it. But does big data only mean a large amount of data? Whenever we talk about big data, we take an interest in big data analytics because it is directly related to business. How does it serve the purpose of a business? Big data analytics provides analysis reports on data patterns that reflect market trends, consumer behavior, and much more. However, big data analytics involves many considerations. These are generally termed the characteristics of big data, or the V's of big data, and in other words they also define big data. Hence, from that point of view, the first question that comes to mind is: what is big data?

Can we consider any structured, unstructured, or semi-structured data as big data? Not quite; such data must meet certain characteristics. So, what are those characteristics? Traditionally, big data has been described by the 3 V's: Volume, Velocity, and Variety. In fact, these were identified as the essential characteristics, or three dimensions, of big data by Gartner analyst Doug Laney.

However, many more considerations have come into the picture over time, going beyond this 3V concept. Interestingly, they have added further key factors that help answer what the characteristics of big data are. One of the main drivers is advanced analytics and the intelligent processes associated with it, which make data scientists think about these additional characteristics.

There is, however, no consensus on how many V's of big data should be considered, and the list of characteristics depends on it: some talk about 6 V's of big data, others about 4 V's, and so on. Here we will discuss 8 characteristics of big data, or the 8 V's of big data. Let's see which characteristics of big data come under these 8 V's.


Characteristics of Big Data - the 8 V's


1. Volume:

When we talk about big data, volume is probably the very first criterion for consideration. The range of the volume determines whether data should be considered 'big' or not. Usually, data is treated as big data from a volume perspective only when it goes well beyond gigabytes. What do the measurements look like here? They could be terabytes, petabytes, or even exabytes. These volume figures are based on data surveys of different organizations, and here are some examples:

[Figure: examples of data volumes generated by different organizations]

This is also what differentiates such an enormous size of data, i.e., big data, from traditional structured data. In addition, RDBMS or traditional database systems are not efficient at processing or handling data at this scale, because of extended query times, cost, reliability issues, and so on.

Also, as per an IDC estimate, business transactions on the internet for B2B and B2C were expected to reach 450 billion per day by 2020.
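To put these units into perspective, here is a rough back-of-the-envelope sketch in Python. The workload figures (events per day, bytes per event) are made-up assumptions for illustration, not survey data:

```python
# Rough unit arithmetic to put big data volumes into perspective.
# The workload figures below are illustrative assumptions, not measured values.

KB, MB, GB, TB, PB = (1024 ** i for i in range(1, 6))

# Hypothetical workload: 500 million events per day, ~2 KB per event.
events_per_day = 500_000_000
bytes_per_event = 2 * KB

daily_volume = events_per_day * bytes_per_event
yearly_volume = daily_volume * 365

print(f"Daily  : {daily_volume / TB:.2f} TB")
print(f"Yearly : {yearly_volume / PB:.2f} PB")
```

Even this modest hypothetical workload lands in the terabyte-per-day, petabyte-per-year range, which is well beyond what a single traditional database server is designed to handle.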


2. Velocity:

Stream analytics is a popular term today, where high-speed data is processed using specialized tools. But do you know which characteristic of big data stream analytics is associated with? No doubt, it is the velocity of data. Here, velocity means the speed at which data is generated, and how frequently it is delivered and analyzed.

The amount of data generated in today's scenario is massive, and most importantly, it needs real-time processing for analysis purposes. For example, Google alone handles more than 40k search queries per second. Hence, we can imagine how fast processing must be to get insights from the data.
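As a minimal sketch of what processing high-velocity data can look like, the Python snippet below counts events over a sliding one-second window. The event generator is a stand-in assumption for a real source such as a message queue or log stream:

```python
import time
from collections import deque

# Minimal sketch of velocity: counting events over a sliding one-second
# window from a simulated stream. The generator below is a stand-in for
# a real source such as a message queue.

def simulated_stream(n_events):
    for i in range(n_events):
        yield {"id": i, "ts": time.time()}

window = deque()          # timestamps inside the current window
WINDOW_SECONDS = 1.0

for event in simulated_stream(100_000):
    now = event["ts"]
    window.append(now)
    # Drop timestamps that have fallen out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()

print(f"Events seen in the last second of the run: {len(window)}")
```

Real stream-analytics engines apply the same windowing idea, but distributed across many machines and with much richer aggregations.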

3. Variety:

Big data deals with any data format: structured, unstructured, semi-structured, or even very complex structured. Storing and processing such unformatted data through an RDBMS is not easy. However, unstructured data often provides valuable insights that we rarely get from structured data alone. Besides, a variety of data also means a variety of data sources, so this characteristic of big data also provides information about where the data comes from.
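The sketch below illustrates what handling variety can mean in practice: normalising structured (CSV), semi-structured (JSON), and unstructured (free text) inputs into one common record shape. The sample payloads and the naive text parsing are made-up assumptions for illustration only:

```python
import csv
import io
import json

# Illustrative sketch of variety: normalising structured (CSV),
# semi-structured (JSON), and unstructured (free text) inputs into a
# common record shape. The sample payloads are made up.

csv_payload = "user,amount\nalice,10\nbob,25\n"
json_payload = '{"user": "carol", "amount": 40, "tags": ["vip"]}'
text_payload = "dave paid 15 dollars at the kiosk"

records = []

# Structured: CSV rows map directly onto fields.
for row in csv.DictReader(io.StringIO(csv_payload)):
    records.append({"user": row["user"], "amount": int(row["amount"])})

# Semi-structured: JSON may carry extra, optional fields.
doc = json.loads(json_payload)
records.append({"user": doc["user"], "amount": doc["amount"]})

# Unstructured: free text needs extraction logic (very naive here).
words = text_payload.split()
records.append({"user": words[0], "amount": int(words[2])})

print(records)
```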


4. Veracity:

Not all data that comes in for processing is valuable. So, unless the data is cleansed correctly, it is not wise to store or process all of it, especially when the volume is this massive. This is where the veracity dimension of big data comes in. This particular characteristic also helps determine whether the data is coming from a reliable source and whether it is the right fit for the analytic model.
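A very small sketch of such veracity checks is shown below: keep only records that are complete, in range, and from trusted sources. The source names, rules, and sample records are illustrative assumptions:

```python
# Minimal sketch of veracity checks: keep only records that are complete,
# in range, and from sources we trust. The rules and sample data are
# illustrative assumptions.

TRUSTED_SOURCES = {"pos_terminal", "web_checkout"}

raw_records = [
    {"source": "pos_terminal", "user": "alice", "amount": 10.0},
    {"source": "unknown_feed", "user": "bob",   "amount": 25.0},   # untrusted source
    {"source": "web_checkout", "user": None,    "amount": 40.0},   # missing field
    {"source": "web_checkout", "user": "carol", "amount": -5.0},   # out of range
]

def is_trustworthy(record):
    return (
        record["source"] in TRUSTED_SOURCES
        and record["user"] is not None
        and record["amount"] >= 0
    )

clean = [r for r in raw_records if is_trustworthy(r)]
print(f"Kept {len(clean)} of {len(raw_records)} records:", clean)
```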


5. Variability:

In big data analysis, data inconsistency is a common scenario because the data is sourced from many different places and contains different data types. Hence, to get meaningful information out of that enormous amount of data, anomaly and outlier detection are essential. That is why variability is considered one of the characteristics of big data.
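As a simple, illustrative example of outlier detection (the values below are made up, and the two-standard-deviation rule is just one common heuristic):

```python
import statistics

# Simple sketch of outlier detection for variability: flag values that
# sit more than two standard deviations from the mean (a z-score rule).
# The sample values are made up.

values = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 48.7, 12.3, 11.7]

mean = statistics.mean(values)
stdev = statistics.stdev(values)

outliers = [v for v in values if abs(v - mean) / stdev > 2]
print("Outliers:", outliers)
```

Here the lone value of 48.7 is flagged, while the rest of the readings pass unchanged.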


6. Value:

The primary interest in big data is probably its business value. Perhaps this is the most crucial characteristic of big data, because unless you get business insights out of it, the other big data characteristics have little meaning.


7. Visualization:

Big data processing alone is not enough to get a meaningful result out of the data. Unless the result is represented or visualized in a meaningful way, there is little point in analyzing it. Hence, big data must be visualized with appropriate tools that support different parameters and help data scientists or analysts understand it better.

However, plotting billions of data points is not an easy task. It involves different techniques, such as treemaps, network diagrams, cone trees, etc.
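One common workaround is to aggregate before plotting rather than drawing every raw point. The sketch below (assuming matplotlib is installed; the synthetic data stands in for a real dataset) bins one million samples into a histogram so the whole distribution fits in a single chart:

```python
import random

import matplotlib.pyplot as plt

# Illustrative sketch: rather than plotting every raw point, aggregate the
# data first (here, into histogram bins) so that even very large datasets
# can be summarised in a single chart. The synthetic data below stands in
# for a real dataset.

random.seed(42)
values = [random.gauss(100, 15) for _ in range(1_000_000)]

plt.hist(values, bins=50)
plt.title("Distribution of a synthetic metric (1M samples)")
plt.xlabel("value")
plt.ylabel("count")
plt.savefig("metric_histogram.png")  # or plt.show() in an interactive session
```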


8. Validity:

Validity has some similarities with veracity. As the word suggests, the validity of big data means how correct the data is for its intended purpose. Interestingly, a considerable portion of big data is never used; this is known as 'dark data.' The remaining part of the collected unstructured data is cleansed first before analysis.


Related post - 5 Big data analytics trends for 2019 that will influence artificial intelligence.


To conclude, each of the eight characteristics of big data mentioned above comes with certain advantages, but they are not without challenges. These characteristics help determine the root cause of failures or defects in data in near real time. Analysis based on them also feeds into a company's risk portfolio and helps prevent fraudulent activities.

Please share your valuable inputs in the comments section to make the article more informative.
