Big data isn’t just about the size of the data set: The discovery of new data sources is also important. There’s web data, sensor data, location data and, now, there’s artistic data. No, not data about the properties of the world’s masterpieces, but data about the actual strokes we use while we’re drawing.
Two studies, both being presented at this week’s SIGGRAPH conference in Los Angeles, have demonstrated that it’s possible to learn a lot about how people draw if you just have the right data. In one case, a team from Carnegie Mellon University and Microsoft Research had subjects play a cross between Pictionary and Wheel of Fortune on their iPhones in order to generate data. That study is particularly interesting, if only because of how it took advantage of the iPhone’s ubiquity to crowdsource data generation, ending up with a data set that now contains more than 17,000 drawings.
Subjects played a game called DrawAFriend that had them trace images of celebrities or mutual friends with their fingers. Once they were done, the drawing was presented to the other player stroke by stroke, and he or she guessed the letters in the subject’s name based on their confidence in who they were looking at. The fewer guesses it took a player to identify the subject, the better those strokes were scored for the purpose of comparing them.
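One way to picture that scoring idea is a simple ratio of letters to guesses. To be clear, this is my own hypothetical sketch, not the researchers’ actual scoring method: it just captures the principle that fewer wasted guesses imply more recognizable strokes.

```python
# Hypothetical sketch of "fewer guesses = better strokes" scoring.
# This is NOT the study's actual method, just an illustration.

def stroke_score(total_letters: int, guesses_used: int) -> float:
    """Map fewer guesses to a higher score in (0, 1].

    Assumes at least one guess per letter is needed -- a
    simplification made for this illustration.
    """
    if guesses_used < total_letters:
        raise ValueError("need at least one guess per letter")
    return total_letters / guesses_used

# A name identified with no wasted guesses scores 1.0;
# extra wrong guesses drag the score down.
print(stroke_score(5, 5))   # 1.0
print(stroke_score(5, 10))  # 0.5
```

Under this toy scheme, a perfectly recognizable drawing earns the top score, and every wrong letter guess by the viewer counts against the drawer’s strokes.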
The researchers then developed a correction method that can account for bad form or fat fingers without the user ever knowing their strokes are being fixed. Going forward, the researchers are looking at all sorts of new ways to analyze the behavioral data they’ve collected (e.g., when users used the “undo” function, or in what order they made their strokes) and also are considering how to automatically make strokes not only accurate, but also aesthetically beautiful.
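To give a rough sense of what silent stroke correction can look like, here is a deliberately simple sketch: each noisy touch point is pulled partway toward the nearest point on a reference contour. This is far cruder than the study’s crowdsourced approach and is purely illustrative; the function names and the `strength` parameter are my own assumptions.

```python
# Hypothetical illustration of silent stroke correction: blend each
# fat-fingered point toward the nearest point on a reference contour.
# Much simpler than the researchers' actual method.
import math

def correct_stroke(stroke, contour, strength=0.5):
    """Pull each (x, y) stroke point partway toward the contour."""
    corrected = []
    for (x, y) in stroke:
        # Nearest point on the reference contour to this touch point.
        nx, ny = min(contour, key=lambda p: math.hypot(p[0] - x, p[1] - y))
        corrected.append((x + strength * (nx - x), y + strength * (ny - y)))
    return corrected

contour = [(float(i), 0.0) for i in range(11)]  # target: a horizontal line
wobbly = [(0.0, 0.4), (2.0, -0.6), (4.0, 0.2)]  # fat-fingered input
print(correct_stroke(wobbly, contour))          # points pulled toward y=0
```

With `strength=0.5` each point moves halfway toward the target line, so the wobble shrinks but the user’s gesture is still recognizably theirs, which is presumably the point of keeping the correction invisible.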
Another study comes from Disney Research, which had artists use a stylus to sketch portraits based on photos of 24 different people. They drew four sketches of each person, with less time for each successive attempt (270, 90, 30 and 15 seconds, respectively). The researchers collected about 8,000 strokes from each of the seven artists involved.
Data analysis can identify discrepancies between the geometric properties of the photos and the artists’ sketches (e.g., consistently narrow eye placement or large jawlines), which could help artists identify bad habits they need to correct, the researchers noted. The data also helped the researchers create a program that can mimic individual artists’ tendencies to produce sketches similar in appearance to what they would draw.
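A toy version of that kind of discrepancy analysis might compare a normalized geometric feature between the photo and the sketch. The landmark names and the specific feature below are my own assumptions for illustration, not anything from the Disney study:

```python
# Hypothetical sketch of discrepancy analysis: compare eye spacing
# (normalized by face width) between photo landmarks and sketch
# landmarks. Landmark names are invented for this example.
import math

def eye_spacing_ratio(landmarks):
    """Inter-eye distance divided by face width."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    eyes = dist(landmarks["left_eye"], landmarks["right_eye"])
    face = dist(landmarks["left_cheek"], landmarks["right_cheek"])
    return eyes / face

photo = {"left_eye": (40, 50), "right_eye": (60, 50),
         "left_cheek": (20, 55), "right_cheek": (80, 55)}
sketch = {"left_eye": (44, 50), "right_eye": (56, 50),
          "left_cheek": (20, 55), "right_cheek": (80, 55)}

bias = eye_spacing_ratio(sketch) - eye_spacing_ratio(photo)
print(f"eye-spacing bias: {bias:+.3f}")  # negative => eyes drawn too close
```

Averaged over many sketches, a consistently negative bias like this would flag a habit (drawing eyes too close together) the artist might want to correct.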
As cool as this type of research is as an example of what’s possible when we can capture and analyze nearly any type of digital data, I can see how someone might question its utility. Maybe a company like Disney could turn its techniques into an engine for mass-producing those straight-to-video sequels to its hit movies, or someone could create educational software to help aspiring artists get past some of their bad habits.
But otherwise it seems like the unique aspects of an artist’s work are what make it interesting. If an algorithm is correcting or optimizing my strokes, can I really call the finished product my own work? And, as the Disney researchers point out, variations are sometimes more a matter of individual style than of incorrect replication.
The fact that we can now collect and analyze this type of data at this type of scale (in the case of the Carnegie Mellon project) is important, and I’m sure there are plenty of applications for artistic algorithms that I’m just not seeing. However, it seems there are some things we might not need to quantify just because we can — and art might be one of them.