By the way, the official name is DeepUI Phase 1: Game Studio.
Please subscribe for notifications about the INDIEGOGO campaign!
It is expected to start in March 2017. We are also on TWITTER.
Please also SHARE! Thank you!

Live and virtual time

Live programming means that we edit the program while it is running, so we get instant feedback on every change. To do this, we need to be able to rewind, replay, and loop time. We can record inputs and edit the program until it is right, or jump to the exact point in time when a variable changes. In general, time is visualized in space (a timeline), so it becomes static and can be understood and controlled without actually waiting it out.
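
To make the idea concrete, here is a minimal sketch of input recording and deterministic replay, the mechanism behind rewinding and jumping to the moment a value changes. The TypeScript below is purely illustrative; the names and the Input/State shapes are assumptions, not DeepUI's actual API.

```typescript
// Minimal sketch of input recording and deterministic replay.
// All names here are illustrative assumptions, not DeepUI's actual API.

type Input = { frame: number; key: string };
type State = { x: number };

// Deterministic update: the same state and inputs always produce the same result.
function update(state: State, inputs: Input[]): State {
  const dx = inputs.some(i => i.key === "right") ? 1 : 0;
  return { x: state.x + dx };
}

class Timeline {
  private recorded: Input[] = [];

  record(input: Input): void {
    this.recorded.push(input);
  }

  // Re-run the program from the start up to `frame` using the recorded inputs.
  stateAt(frame: number, initial: State): State {
    let state = initial;
    for (let f = 0; f < frame; f++) {
      state = update(state, this.recorded.filter(i => i.frame === f));
    }
    return state;
  }

  // Find the first frame (up to maxFrame) where a watched value changes,
  // i.e. "jump to the exact point in time when a variable changes".
  firstChange(initial: State, watch: (s: State) => number, maxFrame: number): number {
    let prev = watch(initial);
    for (let f = 1; f <= maxFrame; f++) {
      const value = watch(this.stateAt(f, initial));
      if (value !== prev) return f;
      prev = value;
    }
    return -1; // the value never changed within the recording
  }
}

// Usage: record some input, then inspect any past moment without waiting it out.
const timeline = new Timeline();
timeline.record({ frame: 3, key: "right" });
console.log(timeline.stateAt(5, { x: 0 }));                // { x: 1 }
console.log(timeline.firstChange({ x: 0 }, s => s.x, 10)); // 4
```

Because the update function is deterministic, the same recording can be replayed after every edit, which is what makes instant feedback on a running program possible.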


In DeepUI, functions and variables are visualized and are interactive, like physical objects, so we can manipulate the elements of the program directly. Abstract values (e.g. booleans) are visualized and can be directly manipulated too. Objects are shown as a network, so you can see their context. By placing everything in a visual space, we can see immediately how a change affects the entire system.

DeepUI Phase 1: Game Studio

Phase 1 is for game creators, and also for visual artists who want the power of programming.
Besides games, generative images, models, and animations (pixel and vector) can also be created with Phase 1. It will also include an asset store where users can sell what they make.

Product timeline

Phase 1: Games and art - DeepUI Phase 1: Game Studio

  • In addition to video games: vector and pixel graphics and animations. (These are all parts of a video game anyway.)
  • Certain kinds of simulations are also possible, mainly for visualizations for now.
  • Target platforms: Windows, MacOS, iOS, Android, Web
  • Editor platforms: Windows, MacOS
  • Ability to sell games in the App Store, Play Store and through other means
  • Ability to sell components (types: parametric graphics, animations, interactive components) in the built-in store
  • Built-in version control integrated with undo. That means that every edit is recorded, and significant ones can be given a name. Modify, undo, then modify again, and a branch is created automatically, so the original branch is not lost as it is with traditional undo (see the sketch after the Phase 1 roadmap).
Phase 1.1: 2.5D
  • 2.5D (layers, transparency) vector and pixel graphics
  • GPU shaders can be edited in a manner similar to what is shown in the video. Both vertex and fragment shaders can be edited with the appropriate visual editor.
Phase 1.2: 3D
  • 3D graphics integrated seamlessly with 2D
Phase 1.3: Support more hardware
  • Support for platforms with a digital pen, such as the iPad Pro.
  • VR support with gloves, preferably force-feedback gloves, which can make you feel like you are actually touching something.
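
As promised above, here is a rough sketch of how undo integrated with version control could be modeled as an undo tree, where history is a tree of edits rather than a stack. This is a sketch under assumptions, not DeepUI's published design; all names here are hypothetical.

```typescript
// Sketch of an undo tree: history is a tree of edits rather than a stack.
// Hypothetical model; DeepUI's actual version-control design may differ.

interface EditNode {
  name?: string;                   // significant edits can be given a name
  apply: (doc: string) => string;  // the change this edit performs
  parent: EditNode | null;
  children: EditNode[];
}

class History {
  private root: EditNode = { apply: d => d, parent: null, children: [] };
  private current: EditNode = this.root;

  // Every edit is recorded as a child of the current node.
  edit(apply: (doc: string) => string, name?: string): void {
    const node: EditNode = { name, apply, parent: this.current, children: [] };
    this.current.children.push(node);
    this.current = node;
  }

  // Undo only moves the cursor up the tree; the abandoned edit stays reachable.
  undo(): void {
    if (this.current.parent) this.current = this.current.parent;
  }

  // Modify, undo, modify again: the second edit becomes a sibling branch,
  // so the original branch is never lost as it would be with a plain undo stack.
  rebuild(initial: string): string {
    const path: EditNode[] = [];
    for (let n: EditNode | null = this.current; n; n = n.parent) path.push(n);
    return path.reverse().reduce((doc, node) => node.apply(doc), initial);
  }
}
```

In this model a branch is just an unrecorded side path in the tree; giving an edit a name is what marks it as significant in the version history.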

Phase 2 (both tracks in parallel, in addition to games) - DeepUI Phase 2: Programming Studio

Phase 2.a: industrial design/simulation and manufacturing control
Phase 2.b: support for web and business applications

Vision and future

The long-term goal of DeepUI is to make us smarter by augmenting our intelligence using computers. Creating a more direct programming experience is just the first step. Here is why we are doing it at all.

Faster innovation
We want to try out and implement our ideas faster. From a business standpoint, this means cheaper innovation. But beyond quantity, quality is even more important. Some ideas are simply too expensive and risky to try out. If we push the cost below a certain threshold, genuinely new innovation becomes more likely to happen. And for us, the thought that it is possible to try out even complex ideas quickly is simply exciting.

Systems should be more reliable, so we should build programming environments in a way that feedback, diagnostics, and testing are integrated into the development process itself.

Connected AI
When it comes to AI, reliability and safety cannot be stressed enough. If an AI can learn and act independently, we can never fully understand it. The other way is to focus on the communication (the interface): we create the AI system as an augmentation of ourselves. That means we have to be able to reprogram it at will, just as we "reprogram" ourselves to learn a new skill. This whole approach depends on the speed of the interface. A very good start is to push the current interfaces we use for programming to their limit first, so that we have a tool for developing the next generation of interfaces. Using this approach, our intelligence can keep up with the advancement of AI.