The next industrial revolution


If you Google “next industrial revolution,” you’ll find plenty of candidates: 3D printers, nanomaterials, robots, and a handful of new economic frameworks of varying exoticism. (The more generalized ones tend to sound a little more plausible than the more specific ones.)

The phrase came up several times at a track I chaired during our Strata + Hadoop World conference on big data. The talks I assembled focused on the industrial Internet — the merging of big machines and big data — and generally concluded that in the next industrial revolution, software will take on the catalytic role previously played by the water wheel, steam engine, and assembly line.

The industrial Internet is part of the new hardware movement, and, like the new hardware movement, it’s more about software than it is about hardware. Hardware has gotten easier to design, manufacture, and distribute, and it’s gotten more powerful and better connected, backed up with a big-data infrastructure that’s been under construction for a decade or so.

All of that means it’s an excellent way to extend the reach of software into the physical world, so people who have spent their lives in software are turning toward hardware now, hoping to build little rafts that will carry their code out of the comfort of the server room and down the unexplored rivers of the physical world.

The problems of the industrial Internet are particularly interesting because they require an enormous amount of domain knowledge in addition to clever software thinking. Our first speaker at our Strata + Hadoop World Industrial Internet session, Daniel Koffler, described aluminum smelting pots that use 600,000 amps of current — enough to disable electronic equipment and magnetize cars nearby. Our second speaker, Ami Daniel, described the lengths that smugglers and savvy merchant captains go to in order to obscure the data streams that come from oceangoing ships, and the skepticism and precision that his team uses to outsmart them.

In my closing panel with executives from Accenture, GE, and Pivotal, we spent the most time talking about integration and skills — how to draw together a lot of experts to work on extraordinarily complicated systems. If you approach these kinds of problems unilaterally as a software generalist, you won’t get very far.

For a few more thoughts on the next industrial revolution, I encourage you to watch my colleague Jenn Webb interview Nate Oostendorp, a co-founder of Sight Machine (and another speaker in my industrial Internet program). Sight Machine uses computer vision and other software techniques to help factories and other physical environments improve their operations. (Full disclosure: O’Reilly’s sister firm, O’Reilly AlphaTech Ventures, is an investor in Sight Machine.)

The Strata + Hadoop World All-Access Pass includes the Industrial Internet Day all-day session and all sessions in the Connected World track — get your pass here.

Cropped image on article and category pages by Markus Grossalber on Flickr, used under a Creative Commons license.



New computing model could lead to quicker advancements in medical research, according to Virginia Tech

For personalized and customized medicine to deliver on its promise, one extremely important tool is knowledge of a person’s unique genetic profile.

This personalized knowledge of one’s genetic profile has been facilitated by the advent of next-generation sequencing (NGS), which has brought the cost of sequencing a genome, such as a human genome, from $95,000,000 down to a mere $5,700. So the research problem is no longer how to collect this information, but how to compute and analyze it.

“Overall, DNA sequencers in the life sciences are able to generate a terabyte (one trillion bytes) of data a minute. This accumulation means the size of DNA sequence databases will increase 10-fold every 18 months,” said Wu Feng of the Department of Computer Science in the College of Engineering at Virginia Tech.

“In contrast, Moore’s Law (named after Intel co-founder Gordon E. Moore) implies that a processor’s capability to compute on such ‘BIG DATA’ increases by only two-fold every 24 months. Clearly, the rate at which data is being generated is far outstripping a processor’s capability to compute on it. Hence the need exists for accessible large-scale computing with multiple processors … though the rate at which the number of processors needs to increase is doing so at an exponential rate,” Feng added.
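Feng’s comparison can be checked with a quick back-of-the-envelope calculation. The sketch below simply compounds the two quoted growth rates over a six-year horizon; it is an illustration of the arithmetic, not anything from Feng’s software:

```python
# Compare the growth rates quoted above: sequence data grows 10x every
# 18 months, while Moore's Law gives roughly 2x compute every 24 months.

def growth(factor: float, period_months: float, horizon_months: float) -> float:
    """Cumulative growth after `horizon_months` at `factor` per `period_months`."""
    return factor ** (horizon_months / period_months)

horizon = 72  # six years, in months
data = growth(10, 18, horizon)     # data volume multiplier
compute = growth(2, 24, horizon)   # single-processor compute multiplier

print(f"After {horizon} months: data x{data:,.0f}, compute x{compute:.0f}")
print(f"Data outgrows a single processor by x{data / compute:,.0f}")
```

At these rates, six years brings roughly 10,000 times more data but only 8 times more compute per processor, which is exactly why Feng argues for scaling out across many processors rather than waiting for faster chips.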

For the past two years, Feng has led a research team that has now created a new generation of efficient data management and analysis software for large-scale, data-intensive scientific applications in the cloud. Cloud computing, in general terms, describes a large number of connected computers, located all over the world, that can simultaneously run a program at a large scale. Feng announced his work in October at the O’Reilly Strata Conference + Hadoop World in New York City.

For background on Feng’s announcement, one needs to go back more than three years. In April of 2010, the National Science Foundation teamed with Microsoft on a collaborative cloud computing agreement. One year later, they decided to fund 13 research projects to help researchers quickly integrate cloud technology into their research.

Feng was selected to lead one of these teams. His target was to develop an on-demand, cloud-computing model using the Microsoft Azure cloud. It then evolved naturally to make use of Microsoft’s Hadoop-based Azure HDInsight Service. “Our goal was to keep up with the data deluge in the DNA sequencing space. Our result is that we are now analyzing data faster, and we are also analyzing it more intelligently,” Feng said.

With this analysis, and with researchers all over the globe able to see the same sets of data, round-the-clock collaborative work becomes possible. “This cooperative cloud computing solution allows life scientists and their institutions easy sharing of public data sets and helps facilitate large-scale collaborative research,” Feng added.

Think of the advantages oncologists from Sloan Kettering to the German Cancer Research Center would have by maintaining simultaneous and instantaneous access to each other’s data.

Specifically, Feng and his team (Nabeel Mohamed, a master’s student from Chennai, Tamil Nadu, India, and Heshan Lin, a research scientist in Virginia Tech’s Department of Computer Science) developed two software-based research artifacts: SeqInCloud and CloudFlow. Both are members of the Synergy Lab, directed by Feng.

The first, short for “sequencing in the clouds,” builds on the Microsoft cloud computing platform and infrastructure to provide a portable solution for next-generation sequence analysis. It optimizes data management, such as data partitioning and data transfer, to deliver better performance and more efficient use of cloud resources.

The second artifact, CloudFlow, is his team’s scaffolding for managing workflows, such as SeqInCloud. A researcher can install this software to “allow the construction of pipelines that simultaneously use the client and the cloud resources for running the pipeline and automating data transfers,” Feng said.
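To make the client-plus-cloud idea concrete, here is a minimal, hypothetical sketch of the kind of pipeline scaffolding Feng describes. The stage names, locations, and transfer logic are illustrative assumptions for this article, not CloudFlow’s actual API:

```python
# Hypothetical sketch: a pipeline whose stages run either on the client
# or in the cloud, with data transfers handled automatically whenever
# consecutive stages run in different places.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stage:
    name: str
    run: Callable[[str], str]   # takes an input path, returns an output path
    location: str               # "client" or "cloud"

def run_pipeline(stages: List[Stage], data: str) -> str:
    location = "client"  # input data starts on the client
    for stage in stages:
        if stage.location != location:
            print(f"transfer {data} -> {stage.location}")
            location = stage.location
        data = stage.run(data)
        print(f"{stage.name} ({location}) produced {data}")
    return data

# Illustrative NGS-style stages: filter locally, then align and call
# variants on rented cloud capacity.
pipeline = [
    Stage("quality_filter", lambda p: p + ".filtered", "client"),
    Stage("align_reads",    lambda p: p + ".aligned",  "cloud"),
    Stage("call_variants",  lambda p: p + ".vcf",      "cloud"),
]

result = run_pipeline(pipeline, "reads.fastq")
```

The point of the scaffold is that a researcher declares *where* each stage should run and the framework decides *when* to move the data, which is the automation Feng’s quote highlights.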

“If this DNA data and associated resources are not shared, then life scientists and their institutions need to find the millions of dollars to establish and/or maintain their own supercomputing centers,” Feng added.

Feng knows about high-performance computing. In 2011, he was the main architect of a supercomputer called HokieSpeed.

That year, HokieSpeed settled in at No. 96 on the Top500 List, the industry-standard ranking of the world’s 500 fastest supercomputers. Its fame, however, came from its energy efficiency: it was the highest-ranked commodity supercomputer in the United States on the 2011 Green500 List, a compilation of supercomputers that excel at using less energy to do more.

Economics was also key in Feng’s supercomputing success. HokieSpeed was built for $1.4 million, a small fraction — one-tenth of a percent of the cost — of the Top500’s No. 1 supercomputer at the time, the K Computer from Japan. The majority of funding for HokieSpeed came from a $2 million National Science Foundation Major Research Instrumentation grant.

Feng has also been working in the biotechnology arena for quite some time. One of his key awards was the NVIDIA Foundation’s first worldwide research award for computing the cure for cancer. This grant, also awarded in 2011, enabled Feng, the principal investigator, and his colleagues to create a client-based framework for faster genome analysis to make it easier for genomics researchers to identify mutations that are relevant to cancer. Likewise, the more general SeqInCloud and CloudFlow artifacts seek to achieve the same type of advances and more, but via a cloud-based framework.

More recently, he is a member of a team that secured a $2 million grant from the National Science Foundation and the National Institutes of Health to develop core techniques that would enable researchers to innovatively leverage high-performance computing to analyze the data deluge of high-throughput DNA sequencing, also known as next-generation sequencing.


How Industry Giants Can Create Corporate Breakthroughs

Most large corporations will admit to struggling with innovation. But in reality most companies, particularly those that manage to last for any reasonable period of time, do day-to-day innovation extremely well. After all, your laptop (if you still use one) is much more reliable than it was a decade ago. Your television picture quality is significantly better. Your cellphone sounds clearer and drops fewer calls. Your shampoo leaves your hair feeling cleaner. Your toothpaste leaves your mouth feeling that much fresher.

Where companies struggle is with the breakthroughs that reinvent existing categories or create entirely new ones. It’s not like large companies never manage to do it. Apple spent most of the past 10 years riding successive waves of breakthroughs. Amazon turned its own internal IT capabilities into a multibillion-dollar cloud-computing offering called Amazon Web Services. And Nestlé has created a similarly large business of coffee devices and related consumables under its Nespresso brand.

But study the stories of these and related corporate breakthroughs, and it often seems that success traces back to a large dose of serendipity or the heavy hand of a charismatic founder. And there are plenty of stories of big bets that ended up disappointing. So it’s no surprise that one of the most frequent questions senior executives ask us is how to increase the odds that their big bets on breakthroughs will pay off.

From our studies of corporate breakthroughs and our own experience in helping corporate giants such as Medtronic rethink the pacemaker market in India, Walgreens transform its corner drug stores into a disruptive mechanism to treat patients suffering from chronic conditions, and a Fortune 100 financial services company reinvent private banking, we’ve come to understand that large corporations have the best chances of successfully breaking through when three ingredients come together:

  • When the company focuses on a latent job to be done. A latent job is an important problem that customers really have but can’t readily articulate. For example, a decade ago, it’s unlikely that small-business owners would have told you that they needed a flexible way to host data and applications, one that preferably turned the fixed cost of computer hardware into a variable cost of renting capacity. But that’s exactly the job Amazon realized so many of them needed when it developed its “elastic cloud-computing solution.” That’s not as exotic a bet as you’d imagine when you consider that just about every business owner is always looking for increased flexibility and opportunities to make fixed costs variable.
  • When the company rides an enabling trend. An “enabling trend” is some technological or societal shift that makes it feasible to address the latent job. The increased availability and affordability of high-speed Internet bandwidth, for example, enabled Amazon’s own technological innovation in cloud computing to reach wider swaths of the market. Spotting transformational trends early can increase corporate confidence in bets on breakthroughs.
  • When the company takes advantage of its own catalytic capabilities in developing its offering. Corporations can’t hope to innovate faster than the hordes of start-ups that nip at their heels. But they can innovate better than those start-ups if they take advantage of a unique capability, such as a hard-to-replicate asset or market access earned over decades of operations. Amazon leveraged the infrastructure it had built to power its own IT systems to provide its unique offering.

Because these ingredients are rarely obvious, remember the sage advice of former Procter & Gamble executive and current Innosight advisor David Goulait: if you want to do something different, you have to do something different. Engaging the usual suspects, following the usual process, and using the usual tools will, almost by definition, not produce anything unusual.

Developing a breakthrough idea will never be a paint-by-numbers exercise. But this should not stop large corporations from redoubling their investments in breakthroughs. For as much as the world extolls the virtue of start-ups, large companies are uniquely positioned to address global challenges such as making health care more affordable, feeding the world’s surging population, or dealing with challenges resulting from rapid urbanization. While it isn’t easy, it is worth the effort.



Cloud computing architecture

The term cloud is now widely accepted to mean a remote data centre that houses a massive network of computers, serves as a central repository of data, and provides all kinds of web services. However, to most people, technical and non-technical alike, the cloud is merely a black box. In this paper, we provide an overview of how the computers in the cloud are organised to support the requirements of performance, scalability, reliability and availability, while keeping down the cost of physical hardware.


Entrepreneurial tweaking: An empirical study of technology diffusion through secondary inventions and design modifications by start-ups


Purpose – Existing theories of innovation posit a split between incremental innovations produced by large incumbents and radical innovations produced by entrepreneurial start-ups. The purpose of this paper is to present empirical evidence challenging this foundational assumption by demonstrating that entrepreneurs play a leading role, not a subordinate role, in sourcing incremental innovations through secondary inventions and design modifications. Read more



Governing economic growth in the cloud

Gross domestic product (GDP) can be boosted by cloud computing, the system in which remote computers on the Internet are used to store, manage and process data rather than the users’ local machines. A report to be published in the International Journal of Technology, Policy and Management suggests that governments should collaborate to boost the adoption of cloud computing internationally. Read more
