Before companies can profit from big data, they often must deal with bad data. There may indeed be gold in the mountains of information that firms collect today, but there are also stores of contaminated or “noisy” data. In large organizations, especially financial institutions, data often suffer from mislabeling, omissions, and other inaccuracies. In firms that have undergone mergers or acquisitions, the problem is usually worse.
Contaminated data is a fact of life in statistics and econometrics. It is tempting to ignore or throw out bad data, or to assume that it can be “fixed” (or even identified) somehow. In general, this is not the case.
Talk of the cloud has stirred up a lot of excitement. A 2011 mandate from the government CIO to move toward a cloud-first strategy has managers hoping that public cloud solutions will provide a quick fix to sticky technology challenges and messy business processes.
To some extent the hype is real. The cloud is transforming how organizations use and manage technology. As cloud adoption becomes more prevalent, government organizations that resist its charms risk missing opportunities to enhance the services they deliver.
I’d heard a lot about Silicon Valley, but had lived and worked in Europe and Asia until I came to MIT Sloan School of Management. Passionate about bringing new technologies to market, I wanted to do an MBA program in the U.S. because, more than anywhere else, this is where taking risk is valued as a driver of change. That seems to be especially true in Silicon Valley, and I was eager to see it for myself.
Organizing our Technology Club’s annual Tech Trek to Silicon Valley, I planned visits to a mixture of hardware and software companies. I also requested that we meet with people from different functions, including product management, which is an area many MIT Sloan students are interested in these days.
By Peter Weill and Stephanie Woerner
From Harvard Business Review
What do the following items have in common: credit cards, streaming or recorded music, production robots, CAD systems, telephone networks, digital games, computers embedded in products like cars and vacuum cleaners, sensors, and video consoles used in remote mining? Answer: They are all digital and connectable.
This is the world of total digitization: a multitude of digital devices and sensors creating streams of data, as well as any number of digital services and products for both internal and external use, distributed throughout the enterprise, and sometimes, but not always, connected. As the drive toward increased digitization continues, enterprises have to get a handle on this total digitization — and corporate CIOs have to step up to the challenge.
Within legal circles, the mystery of “Whodunnit?” has increasingly become “Who wrote it?” as courts, including the U.S. Supreme Court, keep issuing opinions without divulging who actually authored them. Since 2005, for example, the Roberts Court has disposed of at least 65 cases through unsigned per curiam opinions. Many of these cases have also come with unsigned concurring or dissenting opinions.
We place a high value on transparency in our democracy, and that should certainly apply to Supreme Court justices, who, after all, are already protected by lifetime tenure. Obscuring authorship erodes judicial accountability, making it harder for experts and the public alike to understand how important issues were resolved and the reasoning behind these decisions, especially in controversial cases. We’ve all heard the charge that judges are legislating from the bench — but assessing that claim requires, at the least, the ability to link opinions to individual decision makers.