Perspective, J Electr Eng Electron Technol Vol: 11 Issue: 9
Big Data in the Economy of Things and Literature Data Science
Received date: 18 August, 2022, Manuscript No. JEEET-22-54793;
Editor assigned date: 22 August, 2022; PreQC No. JEEET-22-54793 (PQ);
Reviewed date: 31 August, 2022, QC No. JEEET-22-54793;
Revised date: 16 September, 2022, Manuscript No. JEEET-22-54793 (R);
Published date: 29 September, 2022, DOI: 10.4172/jeeet.1000929.
Citation: Curran K (2022) Big Data in the Economy of Things and Literature Data Science. J Electr Eng Electron Technol 11:09
Keywords: Digitalization Technology
We introduce the economy of things and discuss the state of big data in this economy. Despite the championing of big data by the white literature, the refereed literature remains discontented with data science and big data analytics. In an effort to advance big data technology in an economy of things, we propose a value-based taxonomy of big data and a framework that integrates big data engineering, data science, and decision engineering. We also debate current big data analytics problems, argue that the big data V's are at the origin of these problems, and propose anti-V's to remedy the added complications.
In this taxonomy, big data continues to produce non-data facts. While many of these non-data facts are datafied to create grey data, and many are datafied to create dark data, there will also be many non-datafiable non-data facts that are discarded in dark holes. For a justifiable competitive advantage, businesses will process data that is born data, along with grey data, to produce sufficient decision support power to achieve their strategic goals. While these companies may also, as needed, supplement their big data inductive analytics for testing or to add veracity, there are rare occasions where an aggressive strategy may need to process dark data to achieve a tactical interceptive position in the economy of things. Dark hole data cannot be datafied, and unless stronger and feasible non-data analytics comes along, this type of non-data remains inaccessible. However, digging deeper into grey data is often a feasible activity for an extensive search for decisional insights that can advance the organization's business value generation capabilities. In contrast, accessing dark data may be an expensive alternative that is only advised to supplement or test big data inductive analytics.
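The taxonomy above sorts facts by whether they arrive already datafied, can be datafied (cheaply or expensively), or cannot be datafied at all. As a minimal sketch, assuming a simple cost threshold separates grey data from dark data (the threshold and attribute names are illustrative, not from the source):

```python
from enum import Enum

class FactClass(Enum):
    BORN_DATA = "born data"   # facts that arrive already datafied
    GREY_DATA = "grey data"   # non-data facts that can be datafied cheaply
    DARK_DATA = "dark data"   # datafiable only at high cost
    DARK_HOLE = "dark hole"   # non-datafiable; discarded

def classify_fact(is_datafied: bool, datafiable: bool,
                  datafication_cost: float, cost_threshold: float = 1.0) -> FactClass:
    """Toy classifier for the value-based taxonomy sketched above."""
    if is_datafied:
        return FactClass.BORN_DATA
    if not datafiable:
        return FactClass.DARK_HOLE
    if datafication_cost <= cost_threshold:
        return FactClass.GREY_DATA
    return FactClass.DARK_DATA
```

Under this sketch, a strategy that processes born and grey data, and reaches into dark data only tactically, is a policy over these four classes.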
We also discuss the mostly non-Bayesian decision engineering activities in an economy of things and propose Smets' Transferable Belief Model, within Dempster-Shafer theory, as a mathematically sound approach to managing non-Bayesian uncertainty. The internet of things has changed the world and imposed the economy of things. Only a few years old, this new economy already receives quintillions of bytes of big data. Unfortunately, big data came with a great deal of complexity through the many V's that add volume, velocity, variety, and veracity complications. The literature reports multiple attempts to study the effect of digitization on business growth and recognizes that business transformation must rely on the cloud, big data/analytics, social and business dynamics, and mobility. In this new economy, the volume of digital data has now exceeded the volume of analog data, and members are sinking in very deep data, unable to find the big data analytics capable of giving them the decision support power they seek. Members continue to extract and amass large volumes of costly data that change at high speed, while their ultimate goal should instead be to extract actionable decision support that lasts. The digitalization efforts attempted, including statistics, data mining, and expanded machine learning, have all failed to generate the decision support capability that yields a stable economy, one that survives big data. An economy is, by definition, made of events where members can trade with confidence and with a known advantage.
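A distinguishing feature of Smets' Transferable Belief Model is the unnormalized conjunctive rule of combination, in which mass may land on the empty set and thereby quantify conflict between sources. A minimal sketch, with a hypothetical two-element frame of discernment chosen for illustration:

```python
from itertools import product

def conjunctive_combine(m1: dict, m2: dict) -> dict:
    """Smets' unnormalized conjunctive rule: combine two mass functions.
    Mass assigned to the empty set measures conflict between the sources."""
    out = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b  # intersection of the two focal sets
        out[inter] = out.get(inter, 0.0) + wa * wb
    return out

# Two sources over the (hypothetical) frame {rain, sun}
RAIN, SUN = frozenset({"rain"}), frozenset({"sun"})
BOTH = RAIN | SUN
m1 = {RAIN: 0.6, BOTH: 0.4}   # source 1 leans toward rain
m2 = {SUN: 0.5, BOTH: 0.5}    # source 2 leans toward sun
m = conjunctive_combine(m1, m2)
# m[frozenset()] is the conflict mass between the two sources
```

Unlike Dempster's original rule, no renormalization is applied, so the conflict mass remains visible to the decision engineer.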
This advantage promotes the rational member's decision support, as in Simon's decision process. In a digital economy, however, this decision support is the result of a member's processing of all accrued intelligence, given an abundance of big data reports. With all the V's branding these deep big data resources, a member's decision support remains an unsure quest, perhaps of the garbage-in, garbage-out type. Imagine what those V's can do to any member in a digital economy: a diversity of noisy unstructured facts continues to crop up and overflow any costly storage a member can feasibly have, at high speed and with unknown veracity. Under these conditions, it is probably more suitable to adopt a redefinition of the conceptual resources forming big data as four types: noise, data, information, and knowledge. Noise, as in unstructured data, is defined as raw facts with an unknown code system. Data are, instead, raw facts with a known code system. Information is defined as a network of data facts that generates apprehension and, through a Bayesian update, a surprise. Knowledge is inference that can be tested and validated as principles or rules of thumb. Big data, in an economy of things, is populated by instances of these conceptual resources. Later in this article we propose a new value-driven taxonomy using these conceptual resources. Members, who may be individuals, small businesses, or midsize or large companies, process these resources to produce the decision support power they need to collect the event's gain associated with any executed trade in this economy. We view the digital economy in terms of two components: digitization and digitalization. The white literature creates great confusion in distinguishing between the two.
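The Bayesian update invoked above is ordinary conditioning: information is informative to the extent that it moves a member's prior. A minimal sketch, with hypothetical probabilities chosen only to illustrate the mechanics:

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H|E) by Bayes' rule for a binary hypothesis H."""
    evidence = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / evidence

# A report that is likely if a trade is profitable (H) and rare otherwise:
posterior = bayes_update(prior=0.2, p_e_given_h=0.9, p_e_given_not_h=0.1)
# the prior of 0.2 is revised upward; the gap between prior and
# posterior is the "surprise" the information delivers
```

A fact with an unknown code system cannot supply the likelihoods this update requires, which is why noise, in the sense defined above, yields no decision support.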
Digitization is the conversion of analog content into a digital format. Digitalization, in contrast, is a step beyond digitization that processes digitization outputs to generate new business value generation capabilities. Digitalization is concerned with the analysis of the data poured by the internet of things into our economy of things. Organizations have the opportunity to acquire the necessary digitalization power to transform these volumes of data into business value. Members in the economy of things continue investing in big data on a grand scale. They continue Hadooping without any planning for sound data analytics. The decision support produced with digitalization technology remains unproven. There is simply too much change in the data generated by big data: what members extract now may not be the same a minute later. Massive volumes are produced at an unimaginable rate. Every minute, about two quadrillion bytes of data are created; that is, more than two quintillion bytes in one day. Despite these unbounded volumes, members still desire more. A rational member of this economy can no longer continue without a finite plan devised to produce efficient decision support with a Bayesian update. The volume concept attached to big data will remain a paradox until the member's objective of a suitable extraction can be achieved. The same may be said for the velocity concept in big data. A member of the economy of things continues to invest in high speed data, yet no member knows how to coordinate their digitalization technology to achieve the decision support production rate needed in the economy. Despite a velocity of more than 50,000 GB per second of global internet traffic, members are still asking for more. This paradox will continue until the objective of a synchronized decision support rate can be achieved.
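The two volume figures above are consistent with each other, as a quick check shows (taking the per-minute figure as given and multiplying by the minutes in a day):

```python
per_minute = 2 * 10**15          # about two quadrillion bytes per minute
per_day = per_minute * 60 * 24   # 1,440 minutes in a day
# 2.88 quintillion bytes, i.e. "more than two quintillion bytes in one day"
assert per_day > 2 * 10**18
```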
The third V in big data is the coerced variety concept, which denotes the diversity of big data facts in terms of noise, data, information, and knowledge. As much as 80% of big data facts arrive unstructured and without known code systems. Members exchange tweets, photos, videos, documents, and so on; more than 80% of data growth is in videos, images, and documents. Most of these facts are produced with an unknown code system and arrive at high speed. Still, members desire more and keep investing in big data. The continuous acquisition of big data despite the coerced variety concept remains a great paradox, one that can only be resolved if big data is filtered at a matched speed to collect only those facts with known code systems. It is very difficult to produce a Bayesian update from unknown code systems. Big data is also tainted with a greater paradox associated with the veracity concept. It is estimated that, in this economy of things, more than 30% of business leaders do not trust the decisional information they rely on to make decisions. It is also estimated that poor data costs the economy more than three trillion dollars per year. Despite all the ambiguities, inconsistencies, and uncertainties that hinder the veracity of big data, members of this economy still seek more on a grand scale. Other V's may be added to big data, but as long as a feasible Bayesian update cannot be achieved, none of these V's comes without a paradox. With the internet of things, the world is all connected and all digitalized, and all the events making up the digital economy pour their facts into big data logs and repositories. These continual currents of indigestible facts do not bring along any feasible quality analytics to overcome the speed and size of reception. Big data emerged well before the tools needed to tackle it.
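Filtering the incoming stream to keep only facts with known code systems, as suggested above, can be sketched as a simple ingest filter. The registry of code systems and the record fields here are hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical registry of code systems the member can decode
KNOWN_CODE_SYSTEMS = {"utf-8", "json", "csv"}

def filter_known(facts: list) -> list:
    """Keep only facts whose code system is known; the rest is noise
    in the sense defined earlier (raw facts with an unknown code system)."""
    return [f for f in facts if f.get("code_system") in KNOWN_CODE_SYSTEMS]

stream = [
    {"code_system": "json", "payload": "{}"},       # decodable: kept as data
    {"code_system": None, "payload": b"\x00\x01"},  # unknown code system: dropped
]
kept = filter_known(stream)
```

In a real deployment the filter would have to run at the matched speed the text calls for, i.e. at least at the arrival rate of the stream.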