Creating Art From Data

Chaitanya Hegde
Published in Ather Energy
5 min read · Sep 20, 2016

While most of the world is still recovering from Pokémon GO fever, some of the gaming enthusiasts here at Ather have been keeping an eye on the much-hyped “No Man’s Sky” (NMS). Billed as a space exploration game, it promises potentially infinite gameplay: a universe of 18 quintillion planets, each with its own flora and fauna, the ability to build your own spaceships and craft items, and endless exploration. Frankly, which one of us didn’t dream of being a space explorer as a kid?

Though the game launched with plenty of performance issues and to fairly mixed reviews on the depth of its gameplay, it is still quite a leap for gaming technology and the future of graphics. For us, as engineers, the interesting question is: what does it take to build something so massive without actually spending 18 quintillion years making it? How does one play God to such a universe?

The answer: Procedural generation.

By generating worlds, plants, animals, textures and even sounds on the fly, rather than handcrafting each pixel, such a massive universe can be built with relative ease. This requires deterministic equations that create shapes and textures from the numbers fed into them; in the case of NMS, a pseudo-random number generator with a fixed seed (a phone number of one of the developers, apparently) supplies those numbers. So for every player who runs the game, the same universe is recreated from the same equations. Each world generated from a different number can be programmed to have different characteristics, such as creatures and terrain found on no other planet.
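
As a minimal sketch of the idea (not NMS’s actual code, and with made-up planet traits), a generator keyed on a fixed seed hands back identical parameters for a given planet on every run, on every machine:

```python
import random

FIXED_SEED = 5_550_123  # hypothetical stand-in; NMS reportedly seeded with a developer's phone number

def planet_params(planet_index: int) -> dict:
    """Deterministically derive a planet's traits from the global seed and its index."""
    rng = random.Random(FIXED_SEED * 1_000_003 + planet_index)  # same inputs -> same number stream
    return {
        "radius_km": round(rng.uniform(1_000, 10_000), 1),
        "terrain_roughness": round(rng.random(), 3),
        "has_fauna": rng.random() < 0.4,
        "palette_hue": rng.randint(0, 359),
    }

# Every player who visits planet 42 sees the same world; planet 43 gets its own,
# equally repeatable, set of traits.
print(planet_params(42))
print(planet_params(43))
```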

NMS ran into some controversy during development, when an article claimed it used something called the “Superformula” to generate the terrain of these planets, which the developer later denied. But when an equation has a name as hyperbolic as “Superformula”, we had to find out what it was.

The Superformula, surprisingly, is not some complicated partial differential equation. It is a simple generalization of the superellipse, the family of ellipse-like closed curves given by |x/a|^n + |y/b|^n = 1, which are symmetric about their major and minor axes.

The Superformula, in polar coordinates: r(φ) = ( |cos(m₁φ/4)/a|^n₂ + |sin(m₂φ/4)/b|^n₃ )^(−1/n₁)

Despite its simplicity, it can produce an incredible diversity of shapes just by tweaking the parameters m₁, m₂, n₁, n₂, n₃, a and b. The formula was first proposed by Johan Gielis in a 2003 paper in the American Journal of Botany as a compact way to represent a wide range of natural biological shapes.
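
As a rough illustration (a minimal 2D sketch in Python, separate from the implementation linked below), evaluating the formula over a full turn and converting to Cartesian coordinates is enough to trace out a supershape:

```python
import math

def superformula(phi, m1, m2, n1, n2, n3, a=1.0, b=1.0):
    """Radius of the supershape at polar angle phi (Gielis, 2003)."""
    t1 = abs(math.cos(m1 * phi / 4.0) / a) ** n2
    t2 = abs(math.sin(m2 * phi / 4.0) / b) ** n3
    return (t1 + t2) ** (-1.0 / n1)

def supershape_points(m1, m2, n1, n2, n3, samples=720):
    """Sample the curve over [0, 2*pi) and return (x, y) points ready for plotting."""
    points = []
    for i in range(samples):
        phi = 2.0 * math.pi * i / samples
        r = superformula(phi, m1, m2, n1, n2, n3)
        points.append((r * math.cos(phi), r * math.sin(phi)))
    return points

# m1 = m2 = 5 with n1 = n2 = n3 = 0.3 traces a star-like, five-fold symmetric shape;
# nudging any parameter morphs it into something quite different.
outline = supershape_points(5, 5, 0.3, 0.3, 0.3)
```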

“Generalizing the equation of the ellipse allows us to understand the mathematical simplicity and beauty of many natural forms differing only in parameter values”, Dr. Gielis wrote in his paper, “[Superformula equations] allow for a great reduction of complexity of shapes and provides new insights into symmetry, including non-integer symmetries”.

Given its versatility, the Superformula has found uses in fields beyond biology. It has been used to reduce file sizes for symmetric CAD geometries, to find optimal cluster shapes in unsupervised machine learning and, of course, in data visualization.

Since we found this fascinating, and thought you would too, we made an open-source implementation of the formula in both 2D and 3D, which you can play around with here, or check out the source code directly here.

Some screenshots of the output from the code

Whether using the Superformula or not, procedurally generating visuals and sounds from data is a hot topic right now, given its advantages: it does not require huge amounts of memory to carry graphics assets, it is easy on computation, and it gives us a natural way to correlate shapes and sounds with data, among other things.

Neural-network-based software that generates artworks, photo-app filters that give your photos a unique look, and websites that play sounds based on live events are all examples of this. Let’s call it Dataism, à la Dadaism: an art movement that creates art out of data. Bringing out uniqueness in a world with such large amounts of data to analyze is definitely a challenge.

The Ather S340 is a smart, connected scooter that will live-stream data to the cloud during a ride. This data will be used for navigation, predictive maintenance, range estimation and understanding how people ride. These metrics will help us tune the scooter to better suit each rider’s driving style, and also help riders understand their own style and the mistakes they make.

Each rider has a unique driving style: how they throttle, how intensely they turn, how their speed profile changes. All of these affect the scooter’s performance, range and battery life in different ways, and this uniqueness lends itself naturally to being represented by procedurally generated elements.
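
To make that concrete, here is a purely hypothetical sketch, reusing supershape_points from the earlier snippet, of how a few normalized ride metrics could be mapped to Superformula parameters so that two riding styles produce two visibly different, yet reproducible, shapes (the actual ride-signature pipeline may look nothing like this):

```python
def ride_signature_params(avg_throttle, turn_intensity, speed_variance):
    """Map normalized ride metrics (each in [0, 1]) to supershape parameters.

    The mapping is illustrative only: smoother riders get rounder, lower-symmetry
    shapes, while aggressive riders get spikier, busier ones.
    """
    m = 3 + round(avg_throttle * 9)            # symmetry: 3 to 12 lobes
    n1 = 0.2 + (1.0 - turn_intensity) * 1.8    # low n1 -> pinched, spiky outline
    n2 = n3 = 0.3 + speed_variance * 1.7       # exponents control how the lobes bulge
    return m, m, n1, n2, n3

# A calm commute and a spirited ride yield different, reproducible signatures.
calm = supershape_points(*ride_signature_params(0.2, 0.1, 0.15))
spirited = supershape_points(*ride_signature_params(0.8, 0.7, 0.6))
```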

We plan to use interactive visualizations to present this data, and we are very excited to show them to you at our upcoming Experience Center in Bangalore. To emphasize the uniqueness of each rider’s trip, we will produce procedurally generated visuals: a signature for each rider. You can come in to take a test ride on the Ather S340 and, once you are done, visualize your own ride and get your own unique ride signature.

A preview of these visuals is being shown to enthusiasts at our Ather Open House sessions — sign up to receive an invite to these events!

Originally published at www.atherenergy.com on September 20, 2016.
