Summary:

Even though a perception persists that machines can increasingly solve complex problems and process large amounts of data on their own, machine learning experts say humans still play a key role.

Structure Data 2013: Jan Puzicha (Recommind), Timothy Estes (Digital Reasoning), and Scott Brave (Baynote). Photo: Albert Chau


Session Name: What Does Collaboration Among Humans And Machines Really Look Like?

Speakers:

S1: Announcer
S2: Derrick Harris
S3: Scott Brave
S4: Timothy Estes
S5: Jan Puzicha

ANNOUNCER 00:03

Now we have our first panel today. We have some people standing in the aisle; if you could please make room for others so they can sit down and enjoy the show, that would be great. Just as a reminder: up next you have our first panel of the day, which is, What Does Collaboration Among Humans and Machines Really Look Like? It's going to be moderated by conference chair Derrick Harris. He's going to be talking with Scott Brave, Co-Founder and CTO of Baynote; Timothy Estes, CEO and Founder of Digital Reasoning; and Jan Puzicha, Co-Founder and CTO of Recommind. Please welcome our first panel to the stage.

DERRICK HARRIS 00:49

I think the previous presentation was actually a good introduction to the ideas we're going to be talking about, although much less about crowdsourcing and much more about actually getting humans involved in the process of interpreting the results of machine learning and feeding models, and this idea of... we talk about data science, right? This idea of the technology and the domain expertise of business people and everything. Before we dive in deep, I want each panelist to introduce themselves, just to get a sense of where they're coming from and the spaces they deal in.

SCOTT BRAVE 01:28

My name is Scott Brave, CTO and co-founder of Baynote. At Baynote, fundamentally we provide personalization and recommendation engine platforms for e-commerce customers' websites. Customers like an Urban Outfitters or a J.Crew or even a 3M are using us to collect all of their user behavioral data, analyze it automatically, and then adapt the user experience to serve up the right piece of content, product, or service at the right time, to the right user.

TIMOTHY ESTES 01:57

I'm Tim Estes, CEO and Founder of Digital Reasoning. I started this company a while ago on the idea that someday all software would learn, so it's really cool, a decade after that, to watch this panel and all these great speakers dealing with that issue of software learning. Digital Reasoning makes sense of large amounts of unstructured data, converting human unstructured data, so textual data, into entities and relationships, into a knowledge graph that can be used inside the enterprise. We've been mostly in the defense and intelligence area for many, many years now. Our profile's gone up a lot in the last couple of years as we've gone into the commercial side and been engaged with some investment banks.

TIMOTHY ESTES 02:40

We're trying to basically make everyone their own Google, and serve that up so that people can take their own personal data and make value out of it.

JAN PUZICHA 02:46

My name is Jan Puzicha. I'm CTO of Recommind and also Co-Founder. At Recommind, we basically provide a platform for unstructured information that tightly couples indexing capabilities with machine learning approaches, allowing you to understand what you have in your data, extract structure, and use that for a wide range of different applications, from risk, compliance, and governance, to e-discovery, all the way into knowledge management, where we have our original roots... and we're looking into other areas, like sales automation. Recommind also comes from recommendation engines; that's what I figured out early on. But in 2000, that was not really a good market.

DERRICK HARRIS 03:26

First of all, I'm going to start with: when some people hear the term machine learning, the initial thought is, this is machines getting data and acting. Then they kind of get this cliché, kind of trite, but like Skynet taking over and everything. That's not the case at all, at this point. A big part of machine learning is actually reining machines in, or using humans to make them smarter. I just wanted to start off, and Scott, you can kick it off. When you look at the role of humans in machine learning, in terms of making models and algorithms and everything smarter, what's your take? Where do humans fit in at the most basic level?

SCOTT BRAVE 04:12

You're right. A lot of times when we go out and talk to people, there's this perception that really it's all the machine doing it; you just throw a bunch of data in and magic is going to happen. The reality is that humans are involved in multiple layers, right? Even in the choice of what algorithms to use or apply, and sort of feature creation. For example, a lot of that is taking human intelligence and crafting the structure within which the machine is learning. I think there's an opportunity further down the road, as these algorithms are learning, to continue to have the humans contributing, and not just constraining the learning problem but actually having a true collaboration, where the humans are saying, "I've got this idea. Machine, you're already learning and running and automating, but I've got this new idea. Let me put this into the system." If I'm a merchandiser in e-commerce and I launch a new product, I think I know something, so let me input that. Humans really are involved all through, even though they don't realize it, and in fact, they can be involved more. I think that's a big place where the future is going.
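
A minimal sketch of the kind of hint Brave describes, with hypothetical names throughout (this is not Baynote's actual API): a merchandiser's hand-set boost for a just-launched product is blended into the model's learned score, so a human hint can lift an item the model has no behavioral data on yet.

```python
# Hypothetical illustration: blend a model's learned relevance score with
# a merchandiser-supplied hint, e.g. a boost for a just-launched product.

def blended_score(user, item, learned_score, merchandiser_boost, trust=0.3):
    """Return a recommendation score mixing machine and human signals.

    learned_score: callable(user, item) -> float in [0, 1], from the model.
    merchandiser_boost: dict of item id -> float in [0, 1], set by hand.
    trust: how much weight the human hint gets relative to the model.
    """
    base = learned_score(user, item)
    hint = merchandiser_boost.get(item, 0.0)
    return (1 - trust) * base + trust * hint


# Example: a new product with no behavioral data gets a visible score anyway.
boosts = {"new-spring-jacket": 0.9}
score = blended_score("user-42", "new-spring-jacket",
                      lambda u, i: 0.1,  # cold-start model score
                      boosts)
print(score)  # 0.34 -- the human hint lifts an otherwise invisible item
```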

DERRICK HARRIS 05:06

So people aren't just... so machines don't know everything, it turns out. There's still a role for people. I would imagine in intelligence, too, right?

TIMOTHY ESTES 05:14

Right, right. One of the issues encountered is... machine learning, obviously, is a huge field and there are a lot of people playing in it, even here at this conference. You can look at one of the principal separations, between supervised and unsupervised approaches, right? The supervised ones are driven by a label source, that is, training and tagging, and even within the supervised domain, there's massive diversity of techniques and algorithms. But you can deal with different sides of the problem. So, Scott's group, without putting it in a box at all, and we talked about this beforehand before coming out here, it's actually impressive to use strategies and teach a machine strategies, kind of a higher-level knowledge. There's also a really messy problem, and that's a lot of what we work on, which is: how do you teach a computer to tokenize accurately in simplified and traditional Chinese? You might be able to get a 95%, 96% tokenizer for English by just reading Chris Manning's book on statistical NLP and taking techniques from that chapter, but that won't work for Chinese. There's no whitespace the same way. You have to look at what technique is useful based on the kind of data, because the beginning of learning is turning data into features. And humans and algorithms have a lot to provide us in terms of figuring out what features are important. What's the inherent structure in the signal that sorts out the invariance of those features over time? Once we have a model out of that, that's when the higher-level thing can really go to work. Part of it is, what's the model at an aggregate level, and then what's the model inside the signal? You've got different people solving at different levels of that problem. But the human mind is so brilliant because we're solving across the entire spectrum, right? We have a generation ahead of us of this software put together into an enterprise architecture that makes that viable for unstructured information, whether it's machine generated or it's textual.
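
A quick way to see the tokenization problem Estes raises (a toy illustration, not Digital Reasoning's pipeline): whitespace splitting works passably for English but recovers nothing for Chinese, which is written without spaces between words.

```python
# Toy illustration of the tokenization point.

english = "machine learning needs labeled data"
chinese = "机器学习需要标注数据"  # roughly: "machine learning needs labeled data"

print(english.split())  # ['machine', 'learning', 'needs', 'labeled', 'data']
print(chinese.split())  # ['机器学习需要标注数据'] -- a single unbroken token

# Without a learned segmenter, character n-grams are a crude fallback
# feature representation for unsegmented text:
bigrams = [chinese[i:i + 2] for i in range(len(chinese) - 1)]
print(bigrams)  # ['机器', '器学', '学习', '习需', ...]
```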

DERRICK HARRIS 07:03

Jan, I know something Recommind works with is this idea of... you work with e-discovery and the legal field. I think there are two interesting things in that regard. One is this fear that you have to have humans involved to some degree. The other part is figuring out what humans are better at and what machines are better at. Say I'm reviewing a stack of hundreds of thousands of documents in a lawsuit. Where is a machine better at spotting some stuff, and where is a human better? What's the interplay like there?

JAN PUZICHA 07:41

I think the real learning that we have done at Recommind is that it's not really human versus machine. It's also not so much about the algorithm and the specifics around that; it's how you create a synthesis of that into a module or a system that allows you to implement workflows that feel very natural to humans, where they suddenly don't feel like they're basically playing with machine learning or playing against a machine, but it becomes an assistant. It becomes something that they can iterate with. There is a workflow underlying it, and what that really requires is a much deeper understanding of the use cases that you have, and a deep understanding of how you make that interaction happen.

JAN PUZICHA 08:28

I have a machine learning background. I was very excited about algorithms. I'm very deep in math. What we learned we had to spend our time on is much more usability, explanatory components, reporting, and creating a module that you can use for very different purposes but that encapsulates that interaction between the two. We have applied that very successfully in e-discovery use case scenarios. Take governance, for example, where people do need to understand what they have in their large amounts of data and how they want to apply policies to it. It's fundamentally the same problem. It's fundamentally the same type of interaction that you need to have between human and machine in order to make that work.

DERRICK HARRIS 09:09

If you're building a system where humans and machines have to work together, is it a matter of presenting data and results in a way that people can digest and interact with? Or is it, as you suggested, Scott, a way of letting humans into the process to kind of train and test things on the fly? How do you build something that's actually that interactive?

SCOTT BRAVE 09:35

I think there really is a huge user interface challenge here. The way that we like to think about it is, if you take all of machine learning and say we're going to provide interfaces for people to interface with machine learning, that's a huge problem. But if you narrow it down to specific domains or areas (again, take e-commerce, or the merchandiser's role in the human-machine collaboration), you can start to devise these interfaces in a way that maps to how they think about the problem, how they want to express their expertise and knowledge, what we call their hints and hypotheses, to the machine, right? They might have a hint that says, "I think we can structure users this way or categorize users this way. I think that's right. I don't know, but this is the way that I think it works." Or, "I've got this hypothesis that if I present this here, this is going to happen." It's really thinking about it from a use case perspective, and at some level, then, the algorithms are almost secondary, right? Because you want algorithms that are both smart in the way that they learn automatically, but that also merge well with those use case patterns and the ways that you want that interaction to happen.

TIMOTHY ESTES 10:35

What's the building block? That's the question you end up asking. Is it the outcome to the end user? Is it the way in which questions are framed? Questions that we've worked on for a long time are: who is meeting whom, who is talking about what, when are they talking about it, where are they talking about it, and what action is going to come out of it? That means it's ultimately entity oriented in our world, so the interface followed that. It followed that if you're going to teach a computer how to figure out connections and relationships in Chinese source or in Urdu source or in other sources, you have to create an interface where a native speaker can just go through and tag things and correct the default models that are shipped, based on their particular communication. One of the great banes in unstructured data has been how domain centric every solution has been. You would propose a model and some type of solution a priori, and you would try to use it as a target for the training. But in the post-Google world, we realize that the PageRank model has won out, which means the meaning is inside the data, and it has to emerge bottom up. That's a little bit of ideology, but it doesn't mean you can't use the structured analogies, the knowledge you already have investments in. But realize that it will always be getting dated really fast, right? It will always be going out of sync with your data, because the world is changing, the noise of your communication is changing, the way that signal is embedded inside of different things, like Twitter, is changing. So in that layer, you want to have human assistance augmented by algorithms to scale it. Big data is really about augmenting a very finite and extremely limited resource, what you might call the label source, that is, the humans that are providing back what it means. There's a lot of innovation to be done there, and I think we're all working on parts of that.
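
One common way to stretch that scarce label source, in the spirit of what Estes describes, is uncertainty-based active learning: the model routes only its least-confident predictions to a human tagger. A hedged sketch using scikit-learn (illustrative only, not Digital Reasoning's implementation):

```python
# Sketch of uncertainty-based active learning: spend the scarce human
# labeling budget only on the examples the model is least sure about.
# Assumes scikit-learn and numpy are installed.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def pick_for_human_review(labeled_texts, labels, unlabeled_texts, budget=10):
    """Return the unlabeled texts a human should tag next."""
    vec = TfidfVectorizer()
    model = LogisticRegression(max_iter=1000)
    model.fit(vec.fit_transform(labeled_texts), labels)

    # Uncertainty = 1 minus the confidence of the model's top prediction.
    probs = model.predict_proba(vec.transform(unlabeled_texts))
    uncertainty = 1.0 - probs.max(axis=1)

    most_uncertain = np.argsort(uncertainty)[::-1][:budget]
    return [unlabeled_texts[i] for i in most_uncertain]

# Each round: the human tags the returned texts, they join the labeled
# set, the model is retrained, and the loop repeats.
```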

DERRICK HARRIS 12:20

If we could dial back... I mean, one thing I think is worth asking point blank, because we've talked about humans and machines working together, is: what are humans better at? What are machines better at? If you're even going to begin this process, that seems like a good, base level of knowledge to have, right?

JAN PUZICHA 12:44

In my experience, the humans define the use case and the problem. Obviously, they're much better at that, because the machine has no idea what the problem space is. One of the big challenges that you need to solve, in the interaction part, is that you need to make it easy for the human to actually teach the computer what you're looking for and what you're trying to solve as a problem. That sounds easy but actually is very hard. Look at search, for example. It's been known for a long time that expressing an interest in a keyword search is very, very difficult. Humans are much better at reading a piece of unstructured information and saying, "Yeah, that's on topic," than they are at explaining why that is. That, fundamentally, is one of the things that you can use in the interaction: this human notion of "that is really important to me" as the main signal that you try to use in machine learning.
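
Rocchio relevance feedback is a classic textbook example of turning exactly that on-topic / off-topic judgment into a better query, without the user ever explaining why (a generic sketch over tf-idf vectors, not Recommind's method):

```python
# Rocchio-style relevance feedback: refine a query vector by pulling it
# toward documents the user marked relevant and pushing it away from
# documents marked non-relevant.

import numpy as np

def rocchio(query_vec, relevant_docs, nonrelevant_docs,
            alpha=1.0, beta=0.75, gamma=0.15):
    """All arguments are term-weight vectors (e.g. tf-idf) as numpy arrays."""
    refined = alpha * np.asarray(query_vec, dtype=float)
    if len(relevant_docs):
        refined += beta * np.mean(relevant_docs, axis=0)      # pull toward relevant
    if len(nonrelevant_docs):
        refined -= gamma * np.mean(nonrelevant_docs, axis=0)  # push away
    return np.clip(refined, 0.0, None)  # negative term weights are dropped
```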

SCOTT BRAVE 13:39

I think that's right. A lot of times we forget that, even though it is big data, the amount of data that the machine has access to pales in comparison to the amount of data we're absorbing and have access to. We're building these intuitions and these holistic pictures in our minds, and we see connections that a machine might not even have the possibility of seeing, because it doesn't have the right data. So it's figuring out what you actually give as sources of data to the machine, and how you project that intuition.

TIMOTHY ESTES 14:07

There are probably three different areas where the machine opens up key differences from the human part of it. One is clearly scale, right? There will never be a human being, in history or in the future, that will consume and read billions of documents, unless it's some kind of implant, Ray Kurzweil style, right? I hope that doesn't happen, because I don't want it to be my brain, right? I want to keep my identity as a human being. So, scale is never going to be... we're already past that point for humans. That's why, in the defense and intel area, we exist: because it was impossible to manually curate all the data. We're past that point. That flipped with the Internet; it took us ten years to realize that. Now, it's understood and being implemented. The second is speed. Humans weren't designed to receive information from thousands of things at once. We do in the EM spectrum, and our eyes evolved to be able to take that and make something useful out of it. Our senses evolved to take lots of signal and make objects out of it. But we're not really designed to have all these inputs at once at that speed. That's a machine problem, and so is the scale. And finally, because we can't create a unified model of knowledge as a human being across that scale, the judgments that come out of the fusion of all that, the synthesis of it, are something only a machine can do. Now, I think we're going to have a debate in culture and society, to not be technical for a second, about what we do with that. Are we going to have a Google-like model, where it tells you what to do next, when you put all the data in one place? We have to decide if we want to live in that world. Or are we going to have software that we own and technology that we own, that does it for us as an extension of us? I think there is a difference between the two. I think we're going to see that play out, in the next decade, between a software-centric model, a personal empowerment model, and a collective model. That, to me, is the most interesting thing. That's the Skynet problem, right? You get a computer with intentionality that has access to the data, and the next thing you know, you're looking for a robot coming back from the future.

DERRICK HARRIS 16:10

When we were talking about building applications or building systems where humans and machines need to interact, does it make a difference who the user is? Maybe the user is a business user in a retail setting, maybe it's a marketing guy, maybe it's someone in a regulated industry, or a data analyst in a law firm or in the defense field, or it's a lawyer. How does that affect how you build something? Maybe it's someone from a crowdsourcing platform; maybe you're crowdsourcing language translation or something. How important is that?

JAN PUZICHA 16:47

I think it is of major importance, and I think it's really the abstraction of the algorithmic perspective on the problem that, from my perspective, you really need to hide away. You need to take the user's perspective on the problem and offer workflows, offer the possibility to easily interact in a very natural manner, and the application of machine learning really becomes subtle, in that sense, in that you use it behind the scenes in order to solve a business problem. That ties back to what I said earlier about explanatory components: making sure that it becomes transparent to the user why certain things come back, and making it easy to understand and explain. I think that's of crucial importance.

SCOTT BRAVE 17:26

In some cases, you have multiple actors, right? Again, take the personalization and recommendation space. There are actually three actors in this collaboration. There's the machine. There's the expert merchandiser on the backend. But then on the front end, all of the users of the system are actually collaborating in this process. They're the information discoverers out there, connecting things together, and they don't know they're participating in that. But the machine is helping that crowdsourced wisdom to occur.

DERRICK HARRIS 17:49

We only have a couple of minutes left, so I want to ask about taking this a step further. So, machines are very good at scale, very good at spotting some correlations, at ranking stuff or whatever, but we still have to answer why all this is. What's the process? We talked about this to some degree, Scott, but there's the idea of asking for information in context, while the stuff is happening, to feed a more intelligent model.

SCOTT BRAVE 18:14

We do forget. We think about data sometimes as being inert, right? It's this dead thing that we're just analyzing and bringing life into through our analysis. But the reality is, very often, there's a person on the other side of that data, and that person is queryable. In real time, as you're collecting this data and building these models, you can ask questions and say, "Hey, I think you were going to do this," or, "I thought this was the right thing, but why wasn't it? Tell me some information." When you do that in context, in asking questions, it's perfectly acceptable. You don't want to slap a questionnaire at the beginning of an experience and say, "Tell me everything you know," but in the context of learning, it's totally appropriate.

DERRICK HARRIS 18:40

Does that translate into...?

TIMOTHY ESTES 19:15

Yeah. I think, for instance, we hadn't done a lot in sentiment analytics until recently, and the reason is, a green, yellow, or red dot doesn't tell you why someone cares. The facts of why they're upset, or why they like you, are far more important, right? Because that actually allows you to take action. So, we've spent a lot of time classifying things at an aggregate level, whether it's a document or a sentiment. I think as we get into the entities and relationships, the building blocks that structure unstructured data, there's a whole new set of possibilities. The real question in the enterprise is the why question, and I do believe that's the next two or three years in this space of unstructured data in the enterprise: asking the why question, not the what.

JAN PUZICHA 19:32

I would agree with that. It's really about the user's role: where is he coming from? What's driving his inquiry? That should really drive how we build systems.

DERRICK HARRIS 19:45

Great. I would love to dive deeper into this, but we are red-lighted, so it's time to go. Thanks a lot.
