How deep does the development of big data go?

Big data kept behind walls is destined to become dead data. Big data needs open innovation: from the opening, sharing, and trading of data, to the opening of value-extraction capabilities, to open basic processing and analysis platforms, so that data flows like blood through the body of the data society, nourishing the data economy, and letting more long-tail businesses and data thinkers spark colorful chemical reactions. Only then can we create the golden age of big data.

My big data research trajectory

I spent 4-5 years on mobile architecture and the Java Virtual Machine, another 4-5 years on many-core architectures and parallel programming systems, and the most recent 4-5 years chasing what is fashionable: first the Internet of Things, and in recent years big data. Our team's big data research trajectory is shown below:

[Figure: our team's big data research trajectory]

From 2010 to 2012 the focus was on the relationship between data and machines: scale-out, fault tolerance, consistency, hardware-software co-design, and sorting out the various computation modes, from batch processing (MapReduce) to stream processing, Big SQL / ad hoc query, graph computing, machine learning, and so on. In fact, our team is only one part of Intel's big data R&D; the Shanghai team was the main force behind the Intel Hadoop distribution. Since Intel became Cloudera's largest shareholder, we no longer do our own distribution, but platform optimization, open-source support, and vertical-domain solutions remain the focus of Intel's big data R&D.

From 2013 we began to focus on the relationship between data and people: for data scientists, how to do distributed machine learning, feature engineering, and unsupervised learning; for domain experts, how to build interactive analysis tools; for end users, how to build interactive visualization tools. The Intel Labs research center at Carnegie Mellon University in the US produced GraphLab and Stale Synchronous Parallelism; the MIT research center built SciDB for big data analysis and interactive visualization; the China side mainly worked on Spark SQL and MLlib (the machine learning library), and has now moved on to deep learning algorithms and infrastructure.

2014 focused on the relationship between data and data: our original focus was open source, but we later realized that open source is only one part of open innovation. The open innovation big data really needs is open data, open big data infrastructure, and open value-extraction capability.

The dark sea of data and its externality

Below is a very interesting figure. The yellow part is fossilized data, i.e., data that is neither connected nor digitized, and it makes up the vast majority of the data sea. Only the data at sea level (some call it the Surface Web) is data we can really access — data that crawlers can reach and search engines can index. The vast majority of data lies in the dark sea (correspondingly called the Dark Web); this part is said to account for more than 85% of all data, lying asleep in isolated silos inside companies and governments.

[Figure: the data sea — fossilized data in yellow, the Surface Web at sea level, the Dark Web below]

Data is to the data society what water is to a city or blood is to the body. Cities are born beside rivers and nourished by them; once blood stagnates, the body is in danger. For society to live on data, data must be allowed to flow; otherwise society loses many important functions.

So we hope data can meet data and produce chemistry, the way the poem has it: "when the golden wind meets jade dew, one encounter outshines countless in the mortal world." Mr. Ma proposed the concept of "Internet+"; Intel likewise has "Big Data X" — big data multiplied by every industry. Beyond this additive effect there is a multiplicative one, shown in the figure below: data has a wonderful property called externality, where data useless to me may be useful to someone else — my poison, another's honey.

[Figure: the externality of data]

For example, when financial data collides with e-commerce data, internet finance such as micro-loans emerges. When telecom data meets government data, it can create demographic value for urban planning, helping decide where people live, work, and play. When financial data meets medical data, McKinsey has listed a number of applications, such as detecting potential insurance fraud. Put logistics data and e-commerce data together and you can understand how each sub-sector of the economy is running; logistics data plus financial data produces supply-chain finance; and agricultural data and financial data can also react chemically. For instance, a few people who left Google Analytics used open US meteorological data to build a micro-climate model for each piece of farmland, predicting disasters and helping farmers with insurance and claims.

So only by taking the road of open data, letting data from different domains truly flow together and blend together, can we unlock the value of big data.

The three kinds of openness

1. Open data

First, data openness in the narrow sense. The main actors here are governments and research institutions, opening up non-classified government data and research data. Some companies are now willing to open data too, like Netflix and some telecom operators, to help monetize their data and build ecosystems. But open data is not the same as freedom-of-information disclosure. First, data is not information: information is what gets refined out of data. What we hope for is, first, that raw data is opened, and second, that the opening is proactive and free of charge. What we often hear about today is applying for information disclosure, which is passive openness.

Tim Berners-Lee proposed a five-star scheme for open data to guarantee its quality: one star is an openly licensed format, say PDF; two stars is structured data, turning documents into tables like Excel; three stars is an open format such as CSV; four stars means each data item can be located through a URI; five stars means the data can link to other data, forming an open data graph.

[Figure: Tim Berners-Lee's five-star open data scheme]
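To make the ladder concrete, here is a toy Python sketch (not from the talk; the example.org and DBpedia URIs are placeholders) that takes the same records from a three-star CSV export up to four and five stars by giving each item its own URI and an outbound link:

```python
# A minimal sketch of moving toy data up the five-star ladder.
import csv, json, io

rows = [
    {"id": "c001", "city": "Shanghai", "population": 24870000},
    {"id": "c002", "city": "Beijing",  "population": 21540000},
]

# Three stars: an open, non-proprietary format (CSV).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "city", "population"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())

# Four stars: every item is addressable by a URI (example.org is a placeholder).
# Five stars: items link out to other open datasets (hypothetical DBpedia URIs).
linked = [
    {
        "@id": f"http://example.org/city/{r['id']}",           # 4-star: item URI
        "name": r["city"],
        "population": r["population"],
        "sameAs": f"http://dbpedia.org/resource/{r['city']}",  # 5-star: outbound link
    }
    for r in rows
]
print(json.dumps(linked, indent=2))
```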

Today's mainstream open-data portals, like data.gov or data.gov.uk, are built on open-source software. Intel's big data research center at MIT has also built one form of this, called Datahub. The mascot is fun: half elephant, representing database technology, and half octopus, after GitHub's Octocat. It offers more functionality, such as manageability, structured data services and access control, management of data sharing, and in-place visualization and analysis.

[Figure: Datahub and its elephant-octopus mascot]

Data openness in the broad sense also covers data sharing and data trading, for example sharing data peer-to-peer or trading it on multi-sided platforms. Marx said that ownership of the means of production is the foundation of the economy, but as everyone can see, leasing the means of production has become mainstream (see The Lean Startup). In the data setting, I need not own the data, and may not even need the whole dataset, but I can lease it. During the lease, the rights over the data have to be protected.

First, I can let you use my data without letting you see it. Back in 1982, Andrew Yao posed the "millionaires' problem": two millionaires want to know who is richer, but neither is willing to say how much money he has. This is the classic "usable but not visible" scenario. Real life has plenty of examples. The US Department of Homeland Security has a terrorist watch list (dataset 1); airlines hold passenger flight records (dataset 2). DHS asks the airlines for the flight records and is refused, because of privacy; the airlines in turn ask DHS for the watch list, and that fails too, because it is a state secret. Both sides want to find terrorists, yet neither will hand over its data. Is there a way to put dataset 1 and dataset 2 together for one scan while still keeping both datasets secure?
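The talk names the goal but not a protocol. As one illustration of "scan two datasets together without showing them," here is a toy Python sketch of private set intersection via commutative blinding; the modulus, the hash-to-integer trick, and the dataset contents are all placeholder assumptions, and this is not production cryptography:

```python
# Toy private set intersection via Diffie-Hellman-style commutative blinding.
import hashlib
import secrets

# NOT secure; a small Mersenne prime stands in for a real, vetted group modulus.
P = 2**127 - 1

def h(item: str) -> int:
    """Hash an item to an integer mod P."""
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P

# Each party picks a secret exponent it never reveals.
a = secrets.randbelow(P - 2) + 1   # DHS's secret
b = secrets.randbelow(P - 2) + 1   # airline's secret

watch_list = ["alice", "mallory"]          # dataset 1 (DHS)
passengers = ["bob", "carol", "mallory"]   # dataset 2 (airline)

# Round 1: each side blinds its own items and sends only the blinded values.
dhs_blinded = [pow(h(x), a, P) for x in watch_list]
air_blinded = [pow(h(x), b, P) for x in passengers]

# Round 2: each side raises the other's values to its own secret.
# h(x)^(a*b) mod P is the same no matter which exponent is applied first.
dhs_double = {pow(v, b, P) for v in dhs_blinded}   # computed by the airline
air_double = {pow(v, a, P) for v in air_blinded}   # computed by DHS

# Only the overlap of double-blinded values is revealed, not the raw records.
print("matches found:", len(dhs_double & air_double))  # -> 1 ("mallory")
```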

Second, data use has to be audited: what if that scanning program secretly stashes the data away and sends it home? Furthermore, a data pricing mechanism is needed. The two parties' data are certainly not equal in value, and the resulting insights serve each party differently, so there must be a pricing mechanism; it provides better incentives than big-pot communal data sharing.

From peer-to-peer sharing we move on to multi-sided data trading: from one-to-many data services to many-to-many data marketplaces, and then to data exchanges. If today's data marketplaces mostly buy and sell whole datasets, a data exchange performs market-based value discovery and pricing, with small-batch, high-frequency data trades, like a stock exchange.

We have supported quite a lot of research to realize the capabilities just described, such as usable-but-not-visible. Case one is the encrypted databases CryptDB/Monomi: on the side of party A, the data owner, the database is fully encrypted. This in fact also prevents many of the data leaks happening today; you will have heard of, say, an internet service provider's employee sneaking data out to sell — once your data is encrypted, it is useless to him even if he takes it. Second, this encrypted database can still run party B's ordinary SQL programs, because it uses homomorphic encryption and onion encryption, so some SQL semantics can be executed directly on ciphertext.

[Figure: the CryptDB/Monomi encrypted database]
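As a rough illustration of how equality predicates can run on ciphertext, here is a toy Python sketch of one "onion layer": deterministic encryption is simulated with a keyed HMAC (an assumption of this sketch, not CryptDB's actual construction), so the server can match encrypted values without ever seeing plaintext:

```python
# Toy sketch of a deterministic-encryption layer: equal plaintexts map to equal
# ciphertexts, so the server can evaluate SQL equality over ciphertext.
import hmac, hashlib, sqlite3

KEY = b"client-side secret key"  # never leaves the data owner

def det(value: str) -> str:
    """Deterministic 'encryption' via keyed HMAC (illustration only)."""
    return hmac.new(KEY, value.encode(), hashlib.sha256).hexdigest()

# The server only ever stores ciphertext.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name_enc TEXT, city_enc TEXT)")
for name, city in [("alice", "Shanghai"), ("bob", "Beijing"), ("carol", "Shanghai")]:
    db.execute("INSERT INTO users VALUES (?, ?)", (det(name), det(city)))

# The client rewrites WHERE city = 'Shanghai' into its encrypted form;
# the server matches ciphertexts and never learns the plaintext city names.
count = db.execute(
    "SELECT count(*) FROM users WHERE city_enc = ?", (det("Shanghai"),)
).fetchone()[0]
print("rows matching encrypted predicate:", count)  # -> 2
```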

For the "millionaires' problem" we built another usable-but-not-visible technology, called the data café. A café, as everyone knows, is where people and their ideas collide; the data café is where data collides with data to produce new value.

Take two e-commerce companies, one selling clothes and one selling cosmetics. Each has only a limited view of its customers, but if the two datasets were analyzed together, a complete user profile would emerge. Or take cancer, a long-tail class of diseases with far too many genetic mutations, where each research institution has only a limited number of genome samples; to some extent this explains why the cure rate for cancer has improved by only 8% over the past 50 years. If multiple institutions' data could meet in the café, cancer research could be accelerated too.

At the bottom of the café sits multi-party secure computation, based on joint research between Intel and Berkeley. On top of that is a secure, trusted Spark, with usage auditing based on data lineage and pricing based on each party's data contribution to the result.

[Figure: the data café stack]
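The talk does not spell out the pricing rule; one natural candidate for contribution-based pricing is the Shapley value, which splits the value of a joint analysis by each party's average marginal contribution. A toy Python sketch with made-up numbers for the two e-commerce shops above:

```python
# Toy Shapley-value pricing: split joint value by average marginal contribution.
from itertools import permutations

parties = ["clothing_shop", "cosmetics_shop"]

def value(coalition: frozenset) -> float:
    """Hypothetical value of analyzing a given coalition of datasets."""
    v = {frozenset(): 0.0,
         frozenset({"clothing_shop"}): 30.0,
         frozenset({"cosmetics_shop"}): 20.0,
         frozenset(parties): 100.0}   # together, the data externality kicks in
    return v[coalition]

def shapley(player: str) -> float:
    """Average marginal contribution of `player` over all join orders."""
    perms = list(permutations(parties))
    total = 0.0
    for order in perms:
        before = frozenset(order[:order.index(player)])
        total += value(before | {player}) - value(before)
    return total / len(perms)

for p in parties:
    print(p, "earns", shapley(p))
# clothing_shop earns 55.0, cosmetics_shop earns 45.0: the 50 units of synergy
# from combining the datasets is split evenly on top of each standalone value.
```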

2. Open big data infrastructure

What we have plenty of today are people with big data thinking, but they are anxious: they cannot afford big data and do not know how to play with it — how to store it, how to process it. That is where cloud computing comes in. Opening up infrastructure is still the traditional Platform as a Service: Amazon AWS has MapReduce, for example, and Google has BigQuery. These basic big data processing and analysis platforms lower the barrier for data thinkers and release their creativity.
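As a sense of how low the barrier gets, here is a minimal sketch of an ad hoc query on BigQuery; it assumes the google-cloud-bigquery client library and project credentials are already configured, and it uses one of Google's public sample datasets:

```python
# Run an ad hoc analytical query without owning any infrastructure.
from google.cloud import bigquery

client = bigquery.Client()  # picks up project and credentials from the environment

query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""

for row in client.query(query).result():  # the heavy lifting runs on Google's side
    print(row["name"], row["total"])
```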

For example, decide.com crawled hundreds of thousands of data items a day and analyzed price information, structured and unstructured, to tell you which brand to buy and when to buy it. Only four PhDs worked on the algorithms; the rest ran on AWS. Another company, Prismatic, also ran on AWS. It did personalized reading recommendations; I studied its computation graph, storage, and high-performance libraries closely — beautifully written in Clojure, a LISP variant — and the real engineering was done by just three students.

So once this infrastructure is socialized, spring for the big data thinkers will arrive soon.


3. Open value-extraction capability

The usual model today is one-big-one-small, or one-to-many. Take Tesco and Dunnhumby: the latter started out as a very small company, approached Tesco to run its customer loyalty program, and kept doing it for decades. This kind of long-term strategic cooperation beats short-term data analysis services, and decisions take a longer view. Of course, Dunnhumby is no small company anymore and now provides data analysis to other large firms as well. Walmart likewise partnered with a small analytics company, eventually bought it outright, and turned it into Walmart Labs.

The typical one-to-many case is Palantir, founded by Peter Thiel and several Stanford professors. It is still private but valued at nearly ten billion dollars, and it excels at providing data value-extraction services to all kinds of government agencies and financial institutions. The one that truly opens this capability up is Kaggle, a two-sided platform: on one side more than a hundred thousand analysts, on the other the companies with needs. A company posts a problem on Kaggle, analysts bid, and the winner gets the business. This may be the real answer to value extraction for long-tail companies. Better still, of course, if it can be combined with our data café.
