LOKI’S Toolbelt – Visual Basic Progress – Stardate 92652.1

I’m currently working on the User Interface for Loki’s Toolbelt, my computer hacking espionage toolkit.

I first started mocking up a screen using cryptic 'Borg' symbols for the menu navigation, as can be seen below:

[Image: BorgMDI]

On the left-hand side, the symbols translate to 'Watch', 'Access', 'Prank', and finally 'Exit'.

Now as I was playing with the 'look and feel', I realized I'd become a total asshole if this is what I was staring at every day while messing with people.

But I wanted to 'break' from the traditional, COLD and mechanical Windows look and feel – and to inspire my own imagination to be more playful and futuristic in its thinking.

So I looked at LCARS – which stands for Library Computer Access/Retrieval System.

This is the same interface used in most of the Star Trek TV shows and movies.

Here’s an example:

[Image: holodeckprogram]

Now since my goal is to build the tools to build a real-life holodeck, I am thinking with my 'reuse of my control and visual libraries' hat on, and I am 'designing' my controls based on this more playful interface.

When I say controls: I am using Visual Basic 6.0, my language of choice for designing this, because it allows for easy integration between code and the user interface – and it also allows me to create a reusable control library.

Here’s how it works:

[Image: Surveillance]

When Visual Basic starts up, I have a list of controls to choose from on the left hand ‘toolbar’.

Now in this case, I have created a 'model' control – a title bar named 'Surveillance' – which leverages extensive math to draw the physical features of the control.

Now on the right-hand side of the screen is the .vbp 'project' window, which includes all the files associated with my Visual Basic project. Since this is a test application and I am merely doing proofs of concept, I am not a stickler about any standard naming, so it's messy.

Here’s that screen:

[Image: SurveillanceDesign]

On the right-hand side you can see the control I have created for the title bar, named LOKITitlebar.ctl. And EVERY control, whether it's a built-in Visual Basic control or one I have created, has Properties associated with it. Properties can be anything from font name to forecolor, backcolor, and so on.
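
To give an idea of what a Property looks like from the code side, here's a minimal sketch of exposing a BackColor property on a VB6 UserControl – the m_BackColor backing field and the Refresh call are illustrative, not lifted from LOKITitlebar.ctl:

' Backing field for a custom BackColor property on a UserControl
Private m_BackColor As OLE_COLOR

Public Property Get BackColor() As OLE_COLOR
    BackColor = m_BackColor
End Property

Public Property Let BackColor(ByVal NewValue As OLE_COLOR)
    m_BackColor = NewValue
    PropertyChanged "BackColor"   ' let VB know the design-time value changed
    UserControl.Refresh           ' force a repaint with the new color
End Property

' Persist the property so values set in the Properties window survive
Private Sub UserControl_WriteProperties(PropBag As PropertyBag)
    PropBag.WriteProperty "BackColor", m_BackColor, vbButtonFace
End Sub

Private Sub UserControl_ReadProperties(PropBag As PropertyBag)
    m_BackColor = PropBag.ReadProperty("BackColor", vbButtonFace)
End Sub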

For instance, here's the same title control with a few color and text changes – changes that quite literally took a day to initially create the code for, but 30 seconds to apply:

[Image: QWasHere]

Now my goal is to build a real-life holodeck and starship, so I want to make sure that these base controls I have hand-coded are not just useful and pretty – but rock solid.

That is – are they engaging to play with, fast, and easy to make custom modifications to?

Now as I looked at this screen:

[Image: Watch Selected]

I realized: this shit's ugly. It's too much like the Borg – mechanical, cold, dreary, and not visually engaging. And there's simply no sprucing it up, no matter what font I use in the toolbar:

[Image: Watch Selected WithNewFont]

But then I realized I had something pretty interesting with the background I chose – it has depth – and I thought… what if I leverage a 3D look and feel based on the Star Trek 2D interface, and create reusable controls which have depth?

So first I went looking for Star Trek-type futuristic designs, landed on this magnificent screen, and started thinking – do I really need to do 3D?

[Image: FullScreenExample]

I started diving into my code which redraws a simple title bar, and for something like drawing the 'left end cap' (the 180-degree rounded portion of the title bar), it takes a really simple 15 lines of code, as depicted just after the 'Select Case / Case LeftAndRight:' below:

[Image: LOKIDesign]
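
My actual 15 lines are in the screenshot above; as a rough illustration of the kind of math involved, filling a semicircular left cap on a control can be done with something along these lines (DrawLeftEndCap and its parameters are illustrative names, not the real LOKITitlebar code):

' Illustrative sketch: fill a 180-degree left end cap by drawing one
' horizontal line per row of the semicircle (in the control's scale units).
Private Sub DrawLeftEndCap(ByVal centerX As Single, ByVal centerY As Single, _
                           ByVal radius As Single, ByVal capColor As Long)
    Dim y As Single
    Dim halfWidth As Single

    For y = -radius To radius
        ' Circle equation: x^2 + y^2 = r^2, so this row reaches Sqr(r^2 - y^2) to the left
        halfWidth = Sqr(radius * radius - y * y)
        UserControl.Line (centerX - halfWidth, centerY + y)-(centerX, centerY + y), capColor
    Next y
End Sub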

And I started to think: OK, look at the complexity of the single screen involved with 'Science'. Do I really want to have to hand-code all the graphical components by calculating the math out for each and every angle and curve?

The answer’s not only a no, but a fuck no.

Back to the proverbial drawing board.

Now I started re-evaluating my options, and in the process I had already learned a couple of things:

  1. I'd keep things simple and maintain a user interface based on the English language, sans cryptic design. Easy, intuitive, playful, and aesthetically appealing – it just made sense.
  2. I'd leverage 3D OpenGL-based controls from the start, and in the process look for elegant methods to 'save and retrieve' 'drawn' menu items.

Now making these decisions eased things considerably, and I immediately created an OpenGL control based on previous code I had written with it.

Now the benefit of OpenGL is this: I KNOW I am creating a holodeck – a real-life virtual reality simulation which leverages drawing and manipulating objects in 4 dimensions: x, y, and z coordinates plus time. OpenGL – an open, standardized graphics API for manipulating objects in those dimensions – stands simply for Open Graphics Library.

And since I have been a corporate programmer most of my life, it only MAKES sense that I start teaching myself to program in this OpenGL environment anyway.

Win/Win, right?

Now since I was settling on a more professional and scientifically based LCARS, I had to understand its deficiencies:

It is cartoonish, and no matter the screen shot you find, it’s nearly always two dimensional.

Put specifically, none of the screens in ANY of the "fictional" Star Trek references feature anything but two-dimensional images. Sure, there's the 'appearance' of static 3D imagery in the show, but NEVER does anyone actually zoom and pan and look around an object in three dimensions.

This stands in stark contrast to the information I will be presenting. To me – particularly from a professionalism standpoint, the need to 'take my job seriously' as a mischief maker without being an asshole about it – a traditional Star Trek look and feel belies the need to take the controls and functionality of the systems I am building seriously.

That's particularly important if I do decide to sell this one of these days.

Now let's say I implement a 'find all computers in my vicinity' function…

The television show Person of Interest has screens where a computer system manages to locate these people in lightning-fast time.

[Image: Screenshot_2012-12-07-02-05-23]

If I were building the Person of Interest functionality, I would need to allow for dynamic creation of objects – and controls – and 3D data.

Why? To create trees. To create clouds of data. To create interlinking references much like Person of Interest does, only so that I, as an operator, can manage this information and understand it.

At my slow processing speed 😉

So I first created an OpenGL control, and plotted 6 points on a flat plane.

Here's what it took to create a little blue… whatever you call a shape with 6 sides… with OpenGL:

[Image: GLControls]

Now in the course of this development, I quickly realized how messy the code was already getting just to draw vertices. Here's ONE simple point being translated from its physical on-screen position to OpenGL, WITHOUT a z-axis translation:

[Image: SingleVertice]

In layman's terms, what I did was create a collection of vertices for every point I was drawing on the screen. Now since OpenGL 'viewports' leverage a 'centerpoint' of x=0, y=0, z=0, with a 'normal' viewport the edges of what's visible extend from x=-1, y=-1 to x=1, y=1 (disregarding z).

On a flat plane, this means I have to convert every point on the screen to an OpenGL 3D point for 'rendering' and making it appear 3D.

Make sense? Probably not the way I am discussing it.
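
Maybe a small sketch helps. Assuming a control that is ScaleWidth by ScaleHeight units across, mapping a screen point into that -1 to 1 OpenGL space looks roughly like this (PixelToGL is a hypothetical helper, not my actual conversion routine):

' Illustrative sketch: map a 2D screen coordinate into OpenGL's
' normalized -1..1 space, with (0, 0, 0) at the center of the viewport.
Private Sub PixelToGL(ByVal screenX As Single, ByVal screenY As Single, _
                      ByRef glX As Single, ByRef glY As Single)
    ' 0..ScaleWidth becomes -1..1, left to right
    glX = (screenX / UserControl.ScaleWidth) * 2 - 1
    ' Screen y grows downward while OpenGL y grows upward, so flip the sign
    glY = 1 - (screenY / UserControl.ScaleHeight) * 2
End Sub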

Now what I realized was what I first suspected – working with 3D is going to be a friggin nightmare for hard-coding, because if I have one single point off, that means I'll have to rebuild the entire application.

Fuck that. Google 'rebuilding applications in visual basic' for discussion on this subject and the complexities involved.

Now many 3d tool makers leverage a file format specifically for drawing ‘in 3d’.

So not wanting to reinvent the wheel, I checked into file formats for Maya, 3D Studio, Blender, and more.

And what I learned is this:

I am building functionality based on my idea of 3D. At first, it might be nothing more than a novelty – toolbars and nifty controls. Then it gets to be useful, with point-cloud facial recognition and item and license plate identification. Very quickly it becomes imperative that I have great control over my coding environment and understand it thoroughly.

The chief problem I have had so far in working in ANY 3D environment has been a total lack of intuitiveness in its use, regardless of the implementation.

Take, for instance, Blender. Here's a screenshot:

[Image: Blender]

Now let's say I want to resize that cube that appears on the screen when you first start up so that it becomes a rectangle.

Do you know it took me nearly 3 hours to figure out how to change that?

I have yet to figure out how to be precise with it. Blender’s obscure reference system starts with scale = 1 and if you want it to stretch to the edge you put in 8 for the given axis.

Now that’s intuitive, right? Not.

So designing would flat-out suck with Blender. And I was finding the same problem with the other programs – whether it was Maya, 3ds Max, AutoCAD, or others. The file formats alone are something I'd have to leverage 'all the material out there' to reverse engineer.

To settle on one is lunacy, and a dependency I don't want to have.

Here's a snapshot of the Blender file format – from a page appropriately titled "The Mystery Of The Blender File Format":

[Image: BlenderFileFormat]

This is NOT to say these design programs are not amazing programs.

But my needs are pretty unique. Not a one of them integrates easily with the 'real-time AND design-mode' on-the-fly changes I will be requiring.

For instance, let's say I am creating a holodeck program leveraging hand gestures and voice commands: standing within the holodeck, pointing my finger out in front of me at a sandy-textured patch of ground, and saying "Holodeck, place a thorny bush here" – with the program being intuitive enough to understand that my command breaks down into "place" (prepare the object-placement code), "thorny bush" (the object to look up), and "here" (the physical location, obtained by observing me and typical hand gestures).
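
As a toy illustration of that decomposition – ParseHolodeckCommand and its hard-coded keywords are purely hypothetical, nothing like a real speech and gesture pipeline:

' Toy sketch: split a spoken command into an action, an object to look up,
' and a placement cue. Real speech and gesture handling would be far more involved.
Private Sub ParseHolodeckCommand(ByVal spokenText As String, _
                                 ByRef action As String, _
                                 ByRef objectName As String, _
                                 ByRef placement As String)
    Dim cleaned As String
    cleaned = LCase$(Trim$(spokenText))   ' e.g. "holodeck, place a thorny bush here"

    If InStr(cleaned, "place") > 0 Then action = "place"
    If InStr(cleaned, "thorny bush") > 0 Then objectName = "thorny bush"
    If InStr(cleaned, "here") > 0 Then placement = "here"   ' resolved later via hand position
End Sub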

So locating that object is as easy as doing a Google search for 3DS or other 3D-format files.

I might limit my library to ‘working sets’ for ease.

Placement location, leveraging a Kinect – which provides 3D x/y/z positioning – is easy and doable with present-day technology.

But this scenario presents some problems with pre-canned objects:

Let's say I want to dynamically retexture that plant, or recolor it, or age it.

Now an 'aging' mechanism, or the information to drive one, simply won't be available with static 3DS objects – but it MAY still be nice to leverage objects from those static libraries for the original design.

So what became clear was this: to add 'my features' dynamically – aging, discoloration, textures, and the more interesting features a holodeck might have, like AI behavior scripts, pathing within the virtual environment, interactions and events – I need to think, from the get-go, about building in the reusability and flexibility to extend script-based behaviors quite literally into my 3D implementation and designs.

And what I was finding was that I would spend MORE time trying to 'reverse engineer' these foreign interfaces to figure out how to get them to meet my needs than I would if I were simply to roll my own.

I might as well become a graphic designer if that’s the case.

So what I ended up doing was 'pushing' the drawing code out to a flat file – my own… I won't say proprietary… XML format.

Here's the file format I am beginning to work with to draw that 6-sided object, leveraging a simple XML layout parsed with MSXML 3.0 (I like 3.0 the most):

[Image: XMLCODE]
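
The actual file is in the screenshot above; structurally, the idea is along these lines (the element and attribute names here are my illustrative stand-ins, not the exact format):

<Object Name="Hexagon">
    <Vertices>
        <Vertice X="0.5" Y="0" Z="0" />
        <Vertice X="0.25" Y="0.43" Z="0" />
        <Vertice X="-0.25" Y="0.43" Z="0" />
        <Vertice X="-0.5" Y="0" Z="0" />
        <Vertice X="-0.25" Y="-0.43" Z="0" />
        <Vertice X="0.25" Y="-0.43" Z="0" />
    </Vertices>
</Object>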

Now it doesn't take a rocket scientist to understand the information above – at least I don't think it does – as I have 6 specific 'points' I am drawing, and with the x, y, z coordinates 0,0,0 sitting IN THE MIDDLE, in a 'control' scenario this quite literally gets 'read in' by my interpreter and draws this image:

[Image: 6SidedFigure]

Now, two things. The 'blue' is arbitrary right now, so color is the one 'key area' you know I will be updating this format to include.

And this is the code that ‘generated’ that image from the XML file:

Private Sub Draw()
    Dim oVertice As LOKIGLCommon.CVertice
    Dim oContext As CGLObject
    Set oContext = New CGLObject
    
    ' Only draw if the OpenGL context was created and vertices have been loaded
    If (oContext.Create(Me) And Not m_colVertices Is Nothing) Then
        glBegin GL_POLYGON
        
        ' Blue - do the rgb conversion here
        glColor3f 0#, 0#, 1#
        
        ' Emit each stored vertex as a corner of the polygon
        For Each oVertice In m_colVertices
            glVertex3f oVertice.x, oVertice.y, oVertice.z
        Next oVertice
        
        glEnd
    End If
    Set oContext = Nothing
End Sub
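
For completeness, here's roughly how the vertices might get 'read in' from that XML before Draw() runs – a hedged sketch assuming the hypothetical element names above, a reference to "Microsoft XML, v3.0", and that CVertice can be created directly (LoadVerticesFromXML is an illustrative name, not my actual interpreter):

' Illustrative sketch: populate m_colVertices from the XML file.
Private Sub LoadVerticesFromXML(ByVal fileName As String)
    Dim oDoc As MSXML2.DOMDocument30
    Dim oNode As MSXML2.IXMLDOMElement
    Dim oVertice As LOKIGLCommon.CVertice
    
    Set oDoc = New MSXML2.DOMDocument30
    oDoc.async = False
    
    If oDoc.Load(fileName) Then
        Set m_colVertices = New Collection
        
        ' Walk every <Vertice> element and store its coordinates
        For Each oNode In oDoc.selectNodes("//Object/Vertices/Vertice")
            Set oVertice = New LOKIGLCommon.CVertice
            oVertice.x = Val(oNode.getAttribute("X"))
            oVertice.y = Val(oNode.getAttribute("Y"))
            oVertice.z = Val(oNode.getAttribute("Z"))
            m_colVertices.Add oVertice
        Next oNode
    End If
End Sub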

So the next thing I realized after drawing this was – hey, what if I want to draw multiple polygons here and have each be a different color? (Of course this will be an absolute necessity for complex objects.)

Clearly the way I have the color setting hard-coded (glColor3f 0#, 0#, 1#) sucks.
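
One obvious direction – sketched here with a hypothetical CGLPolygon class that has Red/Green/Blue values (0 to 1) and its own Vertices collection, not something I've actually built yet – is to push the color onto each polygon alongside its vertices:

' Hypothetical sketch: give each polygon its own color and vertex list,
' then loop over polygons instead of hard-coding one glColor3f call.
Private Sub DrawPolygons(ByVal colPolygons As Collection)
    Dim oPolygon As CGLPolygon
    Dim oVertice As LOKIGLCommon.CVertice
    
    For Each oPolygon In colPolygons
        glBegin GL_POLYGON
        
        ' Each polygon carries its own color instead of a global hard-coded one
        glColor3f oPolygon.Red, oPolygon.Green, oPolygon.Blue
        
        For Each oVertice In oPolygon.Vertices
            glVertex3f oVertice.x, oVertice.y, oVertice.z
        Next oVertice
        
        glEnd
    Next oPolygon
End Sub

And even that only covers color – textures, lighting, and behaviors would each break the design again.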

That's when I realized – you know, I am going to be breaking this design over and over and over again based on new, and particularly dynamic, requirements (i.e. I want a 3D push button that can actually push), and I will never get this thing done.

You know. The problem is I enjoy working with Visual Basic 6.0.

And Blender has its game engine, which does offer many of the features I am talking about.

The only problem is the environment and Python – not a language I enjoy working with.

Blender overall is horribly complex, and for someone like me who is new to 3D development the basics are hidden and/or obscure – I suspect because these packages assume you already have some kind of expert proficiency in 3D design.

Which I most definitely do not.

So what I am working on right now, after all this, is a paint program for basic 3d design.

I figure I can ‘flesh out’ the simple methods and my burgeoning file format.

Now when I first got into Visual Basic programming several years ago, I created a program called "Graffiti", which was a paint-program-like screensaver that remembered everything you drew and replayed it – in real time – as the screensaver.

I created my own format for that, which simply stored x and y positions, starts and stops, and color changes.

Now I figure I can do the same thing with this:

I am creating a very simplistic 3D paint program, much like Windows Paint. With it, I can teach myself how to draw polygons and how to add color, texture, and lighting – and in the process learn the best approach for storing it all in an XML format.

Here’s what MSPaint looks like:

[Image: MSPAINT]

And here's the mess of a start of what my 3D version of MS Paint is looking like – with a screenshot of the copied 'toolbar' compared to the one I am building on the left:

[Image: 3DPaint]

SO THIS is the absolute foundational start.

My project list:

  1. Finish the 3D Paint program, which looks EXACTLY like Microsoft's Paintbrush, and post it on ZDNet.
  2. Leverage this 3D Paint program (once finished) to hand-design the LCARS-like 3D interface objects such as title bars, buttons, and the like – leveraging 3D elements. This should help 'work out the kinks and bugs' of the 3D Paint program and my XML file format.
  3. Once the look and feel is nailed down and the basic controls are drawn out, focus on the actual mischief-making functionality. I might 'visit' parts of this early, based on UI design concepts I may have to take into consideration before I get too far along.

Scatterbrained? Perhaps.
