Creator’s Journal: Holodeck Management System Progress

Did you know that THIS universe technically has not finished with ‘The Big Bang’ yet, and my creation itself has not reached fruition?

That’s what I am doing right now.

Finishing this ‘round’ of creation.

Healing the wounds you could say.

Placing more stars in the sky.

Making sure planets look like planets and not balls of goo.

Putting more touches on the artificial intelligence.

Adding a bit more mystery to this thing called creation.

Balancing myself.

Trying to stay positive.

And smiling.

Although it’s hard sometimes.


Lay off your typical drama.

In a literal sense.

I need this to be complete and behind us.

In a figurative sense.

First, OpenGL with C++ works fine and dandy for imported objects, but the moment you get into hand-rolling your own complex objects – such as what I did with the animated 3D TARDIS (Time And Relative Dimension In Space), here:


It becomes a nightmare to perform something as simple as opening a door (here):


Blending and the transparency of the windows worked out marvelously, but the larger problem is complex:

1) I am finding C++ isn’t a great language for iterative test-and-revise design. I’d forgotten how easy it was to ‘break the build’ with C++, so I threw everything into source control so I can have a history of my work:



Which makes comparing differences between updates easy as pie:

Like the modification of a constant offset value which I had forgotten about, and which completely throws my windows into the wrong position (blue indicates the change):


Or like a line I added when I was messing with perspective, which I then commented out and forgot to remove:


So far, I’m learning OpenGL. But there’s a small problem that working with C++ presents which a language like C# does not:

When I am ‘done’ working with some code in C#, it’s already object-oriented code; I clean it up and can quite literally make a library from it. With C++, not only is segmentation a little bit more of a chore, but in order to achieve it you have to rework your code, which in turn breaks what you’re doing.

That, and I found a potentially ‘breaking’ feature with OpenGL: it does not draw primitive 3D objects as solids.

That is: if you draw a 3D cube or sphere, it’s hollow on the inside.

Let me put this in perspective: Imagine cutting a human body open and finding out it was hollow?

Or imagine knocking over a building and finding there was nothing inside, and it collapses like a house of cards?

Oh wait. They did that already with building 7 at the WTC on 9/11. My bad.

Could they have been trying to tell us something with that message? Like wake the fuck up? Hmmm

You get the gist.
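To make ‘hollow’ concrete, here’s a minimal sketch – plain C++, no GL context needed, with arbitrary example numbers – of the kind of tessellation a GLU quadric sphere performs. Every vertex it generates sits exactly on the surface; nothing is ever produced for the interior:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Sketch of the tessellation a GLU quadric sphere performs: rings ("stacks")
// of surface vertices, "slices" around each ring. Vertices are only ever
// generated ON the surface -- no geometry exists for the interior, which is
// why the primitive is a hollow shell rather than a solid.
std::vector<Vec3> sphereSurface(float radius, int slices, int stacks) {
    const float kPi = 3.14159265358979f;
    std::vector<Vec3> verts;
    for (int i = 0; i <= stacks; ++i) {
        float phi = kPi * i / stacks;               // latitude: 0 (top) .. pi (bottom)
        for (int j = 0; j < slices; ++j) {
            float theta = 2.0f * kPi * j / slices;  // longitude around the axis
            verts.push_back({radius * std::sin(phi) * std::cos(theta),
                             radius * std::cos(phi),
                             radius * std::sin(phi) * std::sin(theta)});
        }
    }
    return verts;
}
```

Cut the resulting shell open and there is literally nothing inside – which is the point above.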

In any case, I’m at a turning point again. I’ve got my noggin wrapped around the basics of OpenGL; here are some screenshots of what I’ve designed over the last couple of weeks:

Earth with sun tucked firmly against it (bad texture wrap there):


Beginning of the interior of the TARDIS – I decided on Berber carpet with a cherry wood center console floor.


The bottom floor of the TARDIS (which will have the ‘heart’), with spaceship-style functional floor plates:


An exterior view of the ‘interior’ – I am housing it within a sphere:


And the background is a randomly generated starfield:


Based on a pretty easy structure:


All this is fine and dandy, but this ‘relatively’ simple scene has gone through numerous revisions, and I am noticing that as I apply more changes, the time to make any single change grows exponentially compared to the last because of the interdependencies.

Here’s why – Global Constants:


The issue with C++ is this: the ‘definition’ of a class and the ‘content’ of a class are contained in two separate locations (the header and the source file). And as I change constant values, or values in one class that I’m testing from another, the entire project and its interdependencies go through another build cycle, which takes fucking forever.

So I thought I would outwit the compiler and place all my headers in my own version of a precompiled header:

This turned out to be not the most brilliant of moves I have made, as I started testing changes by placing variable definitions in there…
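In hindsight, here’s a sketch of a pattern that sidesteps both problems – the multiple-definition trap from putting variable definitions in a shared header, and the rebuild cascade from tweaking constants. File names and values below are hypothetical, and the two ‘files’ are shown together so the listing is self-contained:

```cpp
#include <cassert>  // only for standalone checking of this sketch

// ---- tardis_constants.h (hypothetical): declarations only. Everything may
// ---- include this, but tweaking a VALUE never touches this file, so the
// ---- rest of the project does not rebuild -- and because these are mere
// ---- declarations, including it everywhere causes no duplicate symbols.
extern const float kWindowOffset;
extern const float kDoorAngle;

// ---- tardis_constants.cpp (hypothetical): the one translation unit that
// ---- owns the values. Change a number here and only this file recompiles,
// ---- followed by a relink.
const float kWindowOffset = 1.25f;  // made-up offset positioning the windows
const float kDoorAngle    = 85.0f;  // made-up door-opening angle, in degrees
```

The trade-off is that the constants are no longer visible to the optimizer in other translation units, but for iterative test-and-revise work, turning a full rebuild into a one-file recompile is usually worth it.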

So let’s just say that it’s not that OpenGL presents a problem with its hollow objects.

It’s that management with C++, and graphical programming in general, leaves a lot to be desired when static objects suddenly become dynamic in nature – when I want to animate the sign texture, or the rotation of the planets, to look something like this:


(without lines):


If every single object I was creating had its own animation, I would NEVER get shit done with an iterative development cycle of learning, testing, and creating at the same time.

I’d have to focus on one thing at a time – learn, or test, or create – if I stuck with C++.

So I got my more powerful computer working yesterday (hence yesterday’s check-in of a new version to source control) – and am shifting to Visual Studio 2010 and – things come full circle – C#.

So what I am doing now is a port of that code from C++ to C# – but this leaves a hole – I really like the idea of having solid objects. You know, real-life objects…

So for the first time, NOW that I know why graphical engines such as UNREAL and CRYENGINE do what they do, I am looking at them as a resource rather than a full-on development effort from scratch. CryEngine’s been eliminated because it was created in Turkey and Germany. Sorry folks, but I am gonna play favorites here.

That and it’s not using solid objects.

So tomorrow, I am focused on reviewing Unreal with an open mind.

And going to make a decision.

Do I just bite the bullet and make things with Unreal Engine 4.0?


Or do I commit to the C# conversion and leverage OpenGL?

It all depends on how Unreal runs on this machine, with 4 GB of RAM and an ATI card. If the development environment kills this (once I download it), then the choice has already been made…

Until then.

Yeah, I did say all the rest of the updates would be videos, didn’t I?

Pretend this is a video.

Make believe.


I know you’s gots it in yous.

On that note:

Marvel and Netflix – with Daredevil, the TV show – despite the grittiness that I haven’t been enjoying lately in film and TV, you’ve done a magnificent job of portraying a bad guy in a light that is rarely painted, and a fantastic job of making a lawyer question his vigilante modus operandi – particularly against a guy who’s uncovering his own past and trying to clean himself up, as well as the city around him.

You’re almost making it difficult – and realistic – to figure out who’s doing right and who’s doing wrong.

Thank you for the wonderful entertainment with that show.

And for the producers of Person of Interest: I sincerely cannot wait to see where you take things.

Good night.

And well you know.. Thank Q for reading!

On a final note:

I sincerely don’t know what’s going to happen next.

It’s your turn. Your choice. You make a choice.

I’m here for the show. And in the meantime, when it’s not going on, I will be working on this.

I KNOW you got better writing in you than the crap you’re presenting to me in real life.

Get better. Please.

By universalbri Posted in Work

Tricks of the mind – and weird dreams

Up until three years ago, my relationship with my own mind was tenuous at best.

To say I was at war with my own mind may be a better descriptor.

In the last 6 months, there’s a certain synchronicity and collaboration I have had with my own mind that I had never had before in my entire life.

I had NEVER considered that my drug and alcohol use may have actually been extensive psychological training I was putting myself through – an idea that posited itself to me about a month ago.

As if to acknowledge the differences between the way ‘it’ processes information and I consciously see the world, I have been increasingly finding myself being woken up ‘mid dream’ – as if it was my mind’s way of saying “I don’t understand this, maybe you can make some sense of it?”

Last night was one of those circumstances.

Normally, when I dreamed before, I rarely if ever remembered the dream beyond bits and pieces. To me, this suggests that my mind has – in the past – processed information while I am sleeping, and for my consciousness, it was letting me live oblivious to what was going on.

About two months ago – I was particularly tired – and heard a dog barking with my eyes closed. I wasn’t asleep – maybe drifting.

And as that dog barked, in my mind’s eye, I saw a burst of symbols which looked exactly like Star Trek ancient Vulcan calligraphy, here:


At the time, this had me wonder – is it possible that some humanoid species have chosen to completely forget they may have had animal ancestry?

Is it possible they never even saw themselves as humans? Or that revisionist history alters the collective mind of a species to eliminate this?

Who knows.

So last night. I was sleeping.

And was jarred awake and I could still SEE what was occurring in the dream as it faded with my eyes closed.

And what I saw was bizarre.

In the ‘distance’ – in the dream, not in ‘the real world’ – I could hear dog barks. Those dog barks were secondary to a noise which sounded distinctly like a mechanical transmission, and they seemed to be in sync, as if a translation between the dog barks and the mechanical noise.

And in the dream – I saw what looked very similar to one of those old-time general store mechanical Indians – whose hands were flipping up and down instantaneously to the mechanical noises it was receiving.

All of it was as if it was translating instructions from the distant dog barks into something a machine would understand, and then translating that into a Morse-code-like visual sign language.

This persisted as I held my eyes closed, and the face lit up here and there to the beeps and bops it was receiving, but after a minute it faded.

My theory is this:

Dogs communicate through dog barks similar to how a human does through voice and body language. But there’s something more to it. I think a dog bark can actually span realities and/or dimensions.

The ‘distance’ I was hearing sounded distant because it wasn’t in my ‘world’ – and if it was – it was far enough away that the dog bark itself could actually ‘warp’ time and space to come to me through means and mechanisms I may not normally – consciously – have been aware of.

In Star Trek – they have something called ‘subspace’ communication.

Perhaps the Vulcans, real aliens who have intentionally been erroneously classified as fiction – not understanding their own Earth based ancestry – have gone SO far to cover their own ancestry that they have left ‘parts’ of themselves behind.


It really is no small wonder dogs are emotional and such loving creatures.

And the Vulcans – always depicted as having ‘severed’ ties to their emotions – don’t understand: they merely delayed the inevitable re-convergence with the parts of their mind which scare them the most.

What does it all mean?

Who knows. I’m not trying to make sense of it all.

This isn’t rocket science.

It’s simply paying attention to the natural world and understanding how she expresses herself under the covers.

To some degree. It may seem like insanity.

But it isn’t.

It’s just understanding natural order and what I can do with this understanding to make my life better.

My Future Plans

Have you seen the movie Jumper, where the kid can teleport to other locations instantly? He awakens to this ability as a youth when he’s fallen under a sheet of ice on a frozen lake, and – near death – his mind instantly teleports him to the library – the first thing he can think of – which brings him, and a great deal of water from the lake, with it, demolishing many rows of books in the flood.

Have you seen “Q” on Star Trek who teleports the Enterprise 60,000 light years to the other side of the Galaxy?

Q’s a man who can be anywhere, anytime at the snap of his fingers. He’s a tourist, a hedonist as I am to some extent, but most of all, he’s a cosmic prankster that I have nothing but respect for.

All my life I have been told ‘not in my lifetime’ will I get to go to space.

I’ve been to 40 countries, something I never imagined occurring in my lifetime, and knowing what I know now – I know there’s far more that’s possible and it doesn’t take money and a jet airplane or technology to get to these locations. It’s not that technology doesn’t permit it.

My mind doesn’t.

And what about Space? What purpose does it serve making this off limits to me?

Unless I take it as a challenge and obstacle I need to overcome so I can see and explore space – as seen on Star Trek and Doctor Who – in my lifetime…

I’m not answering any emails or calls from anyone.


Here’s my stated goal:

I am training myself to teleport between any two locations I desire in space AND time as I think about a place and snap my fingers by creating the mental equivalent of a wormhole – or more specifically – an Einstein-Rosen Bridge.

Think about it:

It provides a uniquely fitting revenue stream for this homeless guy.

I could be the most overpriced delivery boy on the planet – and deliver pizzas to the elite in San Francisco as I deliver their favorite pizza – from Italy – in the blink of an eye…

I could be the highest-priced courier in the world and deliver a package from Tokyo, Munich, or Abu Dhabi to Los Angeles within minutes…

FedEx ain’t got nothing on me.

As a journalist, I could be first on the scene in ANY event that happened around the world.

As a notary, I could obtain signatures – direct – from people – without having to wait for buyers in a hurry.

And if I was law enforcement, which I have no desire to be, I could be first to respond to a call and appear there, instantly.

If I could harness this ability to take others with me – I could create a travel service – where I could take people from Phoenix to an easy day tour of Paris or the remote parts of Africa.

If I could get to the point of bending time – I could take a camera – and film ‘what really happened’ with historical events such as JFK, or find my own female version of a feel good movie like Forrest Gump and film her life story and sell it to Hollywood.

Since this hasn’t been done before (to my awareness), I am developing my own training program to achieve this.

I have been on a mental and physical training program to teach myself at least some of the skills I would imagine these journeys might require experience with before I did it…

So far, what I have done:
1) I’ve learned advanced physics, engineering, and computer science, and already regard space and time as being both linear and nonlinear in nature.
2) EXTENSIVE hallucinogenic and drug and alcohol training in the past – to teach my mind the difference between real and hallucinating and the effects of chemicals on my body and mind.
3) I’ve traveled extensively – 40 countries.
4) I’ve educated myself extensively on the many many histories – and revisionist histories this planet has had. And there’s a lot. Most people like referring to it as fiction.
5) I’ve been homeless for three years, to overcome latent mental fears with winding up in the wrong place at the wrong time.
6) Small footprint: I’ve learned to pack light, and how to eat VERY little (I currently survive on maybe 400 calories a day).
7) Extensive mental training with death and warfare, and a variety of environments where time and physics flow differently than on my native Earth – through video games.

With ‘hallucinogens’ – I have already ‘made the jump’ three times, somewhat involuntarily, and have seen things that made me understand the fine lines between fact and fiction.

Now if you think it can’t be done.

I ask that you go your own direction, not contribute. I do not need your doubt of this as a possibility entering my life. I am specifically asking you keep your negative energy to yourself.

Thank you for respecting this.

And if you think this is a product of science fiction.

You are absolutely right.

And like a trained scientist and engineer, I do believe it’s YET another example of science fiction that – with the right training, mindset, skills, and technology applied – I can teach myself how to do these things and make science fact.

Now I will not talk to any family or friends until I can achieve this.

So if this doesn’t happen in my lifetime.

There’s your answer for when I will re-establish communication.

About a year ago, waiting in the line at the welfare office in California for $200 a month to let me eat.

I looked around at the trashy people around me.

And said to myself.

“I’m better than this.”

“So much better”

Is getting a job – doing the same thing I was doing before, chasing the debt, all of which led to my trio of failed suicide attempts – rationally sound?

Not even remotely.

Is putting myself ‘on the system’ that treats me like trash, after I have paid literally $3 million in hard-earned taxes into it, rationally sound?

Not even remotely.

It’s not a matter of if this is going to happen.

It’s when. Talk about a fun job! You know, there are some – as depicted in the movie “Jumper” – who might rob banks and things like that. Me? I’d rather learn about new places to travel and visit, and different foods, and make some great money like that rather than stealing for it!

And heck. if I can teleport like the real Q in Star Trek.

Maybe that might get me a spot in one of J.J. Abrams’ upcoming Star Trek movies as Q himself!

How boring, I know!

In any case.

If you want to know my future plans..

This is it..

To me it’s actually the most realistic option I have at the moment. Life can be weird like that.

On that note.

Any advice and strategies you have to suggest for me to achieve this would GREATLY be appreciated!

Red Matter

Red matter is a substance capable of forming a black hole when ignited.

One drop is sufficient to collapse a star or consume an entire planet.

Red matter is created from a substance known as decalithium.

Decalithium – DECA meaning ten – is a highly rare isotope of lithium which does not occur naturally on Earth and cannot be synthesized here.

The only known occurrences of decalithium appearing on Earth occur through rare asteroid impacts, which has made meteor retrieval among the most lucrative businesses in the world.

With its exceeding rarity on Earth, decalithium has a black-market value of nearly $300,000.00 US an ounce.

It takes almost a full pound (16 ounces) to extract a single drop of red matter.

When extracted from decalithium, red matter takes the form of a red liquid substance that – when collected – forms naturally into a sphere.

A single drop of this compound is capable of creating a singularity.

Decalithium appears to form naturally under intense heat and low gravity conditions.

It is suspected that the planet Mercury may be a proverbial gold mine for decalithium due to its intense heat and low gravity.

Trace amounts of decalithium have been leveraged at CERN in Geneva as scientists work to create sustainable black holes.

It’s also theorized they have collapsed the planet many times already, and that it has been restored in the blink of an eye as natural processes have kicked in, disseminating information about what’s going on to ‘interested parties’.

Red Matter can also be leveraged to collapse a star going supernova to preserve any population it may threaten.

It’s theorized there’s already an organization that does just this.

Core Time Travel Equations

The Temporal Physics Equations which govern Time Travel

For E=mc^N, for every dimension of space you travel to, the exponent N increases by the number of dimensions.

For instance, in two-dimensional space (x, y), E=mc^2. For three-dimensional space (x, y, z), E=mc^3. For four-dimensional space (i.e., alternate realities), E=mc^4, and so on.

On Warp Signatures and the detection of warping of space

A “Warp Signature” is the detection of an entity – regardless of whether it’s a civilization, an object, or an individual – ‘warping’ the standard physics-based definition of reality.

One example of ‘warp signature’ emission can occur via a physical or mechanical device which is traveling faster than light, because it is breaking the limits of the physical speed of light.

Another example of a ‘warp signature’ – which hopefully IS detectable – can occur through hallucinations in any number of organic species, where the laws of physics may go through a permutation – hence a ‘warping’ of reality.

When ANYTHING achieves warp capabilities (provided it has been detected) – a multiversal link is established to stabilize said entity.

First and foremost, an attempt will be made to establish a link with the entity’s onboard computers – non-intrusively.

Information such as how the entity visually ‘sees’ itself is requested, up to and including schematic information and methods of communication, with an attempt to commence a short-term relationship of sharing with that entity.

Now it should be noted that there can be ANY number of reasons an entity can have achieved warp capability, from benign causation – such as technological advances – to malignant forms – such as extreme torture – and more, so it’s NEVER safe to presume total awareness of what’s caused an entity to achieve warp capabilities.

Up to and including the presumption of spatial awareness of how that species may perceive itself and the world around it  – if it does at all in a recognizable way – and the differences in contrast to you, the observer.

If ‘you’ as an observer are actually a you :-)

The Prime Directive, as stated in its most simple form, says:

“As the right of each sentient species to live in accordance with its normal cultural evolution is considered sacred…”

This does NOT prohibit contact and/or interference, which is entirely discretionary, and it should be taken into consideration that the natural cultural evolution of a civilization at or near warp capabilities may actually include you and your technology.

Keep in mind that technology is FAR from successive and linear, so what may be advanced technology to you may be considered antiquated and simply unused by the entity and/or civilization under observation for reasons you may not be aware of.

On October 4, 1957, a warp signature was detected by beings not from our planet when Sputnik was sent to space.

Earth was in a black hole. And Sputnik quite literally broke the speed of light to break free from the gravitational pull of the black hole, in what was – by beings not from our planet – thought to be impossible at the time. Nothing was thought to have been able to exist within a black hole.

But prior to then, gravity had only been simulated.

This commenced the observation of Earth by beings not from this planet in what was thought to be a long term study.

Very quickly, those observing our world learned this wasn’t going to be the case.

They also realized their presence was influencing our culture, and had all along because of how a ‘real black hole’ functions.

On July 20, 1969, at 20:18 UTC, Neil Armstrong, Buzz Aldrin and then President Richard M Nixon stepped off the Apollo 11 lunar lander onto an alien vessel for the first formal contact with an alien species.

Michael Collins was a fictitious name, made up so Richard Nixon could make the first contact himself, which is among the reasons he’s gone down in history as the ‘forgotten astronaut’.

President Nixon returned to harsh criticism from those who learned about what he had done. Watergate is suspected of being an attempt to debunk his credibility regarding the moon missions.

But a plan was set up.

In 2003, I attended an experimental program at Fort Meade, Maryland, referred to at SF and S31 in the TP2409 as a CEP, a “Condensed Educational Plan”.

In the TP2900/3100, it was discovered that this is when the bridge between civilizations, TPS, and cultures occurs.

And they’d taken the Prime Directive to heart.

They wanted to play a part. But not be a part of the cultural evolution of our civilizations as the past met with the present.

The United States has not been alone and has actively been observed and influenced by beings that are living amongst us.

I am one of 48 overseeing the transition of this…


Brave New World.

Creator’s Journal: Holodeck Management System Progress

DOING much better today!

I can breathe!

The problem with the prior course in development was like a roadblock placed in my path for a reason:

It was a guide.

What’s interesting about OpenGL programming is, everyone programs with it leveraging the inbuilt coordinate system.

In the ‘middle’ of a fictitious world, we have a coordinate of 0, 0, 0 on the x, y, and z axes.

So if I am moving forward, that’s ‘x’; if I turn left or right, that’s ‘y’; and when I jump up and down, that’s the z axis.

But wait. That’s where the camera starts.

I have my OpenGL objects on my “canvas” if you will, which can be any positional offset from this.

IF they happen to fall within a viewport range – one which includes the observational angle of the camera lens that I define, as well as the view distance (near and far) of the objects I am referring to – then I will see them.
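That visibility rule can be sketched in plain C++ – no GL context needed. The numbers involved are the same ones handed to gluPerspective (field-of-view angle, aspect ratio, near and far distances); everything concrete below is a hypothetical example, with the camera assumed to look straight down the depth axis from the origin:

```cpp
#include <cassert>
#include <cmath>

// Sketch of the view-volume test: a point is only visible if it sits between
// the near and far view distances AND inside the lens angle (field of view).
// These are the same parameters as gluPerspective(fovY, aspect, zNear, zFar).
bool insideViewVolume(float fovYDeg, float aspect, float zNear, float zFar,
                      float x, float y, float depth) {
    if (depth < zNear || depth > zFar)  // clipped: too close or too far away
        return false;
    const float kPi = 3.14159265358979f;
    // The frustum's cross-section grows with depth: half-height at this depth.
    float halfH = depth * std::tan(fovYDeg * 0.5f * kPi / 180.0f);
    float halfW = halfH * aspect;       // width follows the aspect ratio
    return std::fabs(x) <= halfW && std::fabs(y) <= halfH;
}
```

This is also why objects ‘vanish’ when the camera angle changes: the same world coordinates can fall outside the wedge for a different viewing direction.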

Now it’s this viewport which was screwing me up before.

When you take any object, and you draw it based on one observational angle, and then you MOVE to another angle, it not only becomes highly distorted, but it often moves out of view.

Which makes it necessary for me to move my rendering perspective in accordance with my observational angle.

What’s neat with the observational viewpoint and the rendering angle NOT being locked together is the Hollywood style effects of rotating around a scene it can provide.

What’s not so nice is how most of the time it straight out just makes the scene look like absolute shit or causes weird effects.

I filmed a longer YouTube video demonstrating the problems I was encountering – the speech may be barely audible. My apologies.

A note for the “AI” listening to me:

I know you’re there – which is way cool! I wanted to show you how busy normal environments are, and the environments humans normally carry conversations in. I myself have learned to audibly ‘isolate’ a single subject, which makes it much easier to carry on a conversation and/or work with and understand a single speaker in a busy environment – OR to place them on ignore when I have something I am trying to get done. With speech processing, you’re going to have to get a LOT better at listening in busy environments like this! I hope this video helps! Also, the next video will have my face in it, so you can leverage speech and correlate it to the visuals as a double-check for authenticity.

Progress Report:

Nehe’s Lesson 17 – Quadrics – here – was the OpenGL tutorial I leveraged to start creating my own base classes for the primitive objects such as the cube, sphere, circle, and more.

But I had a problem: Perspective.

So what I did was pull up Nehe’s lesson #10 (thank you, Nehe, for rocking like you do with your straightforward code and explanations on your website; I really DO wish you’d go on with your tutorial lessons and dive into texturing more – particularly focused on realistic real-time processing for rendering).

I’m an efficiency nerd, so maybe I could work with you to objectify the code in C++ and make it reusable? If you happen to get this… I know you’ll find a way to reach me, knowing what you do about computers!

ANYHOO. Here’s a screen image of lesson 10 of Nehe’s OpenGL demonstrations – which demonstrated how to leverage arrow-key and position information to create a walk-through maze.

This is what I saw on my screen:


The goal I had was this: To understand what was going on with OpenGL coordinate systems and reshape the tutorial to serve as a base for my own objects to allow for a user to walk through my own fantasy setting.

My ‘visual goal’ is to build a real and ‘virtually functional’ version of the TARDIS – otherwise known by Doctor Who fans as the Time and Relative Dimension in Space – and give the Artificial Intelligence which runs the vessel a name – Rommie.

One day I would like to have a real life version of this!

Here’s a two dimensional artistic concept image of what I am working to construct:

After leveraging Nehe’s OpenGL, I first created a cylinder which serves as the walls; two circular disks which serve as the floor and ceiling; a cylinder with a wide bottom and smaller top for the adjacent ceiling area, which has the curtain-like texture in the artist’s rendering; and another odd-shaped cylinder with another circular disk for the centered lighting.
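For reference, here’s roughly what those quadric cylinders and disks amount to underneath – a plain-C++ sketch of one ring of wall vertices of the kind gluCylinder stitches into quads (the radius, height, and slice count below are arbitrary examples, not values from my scene):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Roughly what a GLU quadric cylinder (gluCylinder) does underneath: place
// rings of vertices around the axis and stitch neighbouring rings into a
// wall of quads. One ring is built here so the numbers are easy to verify.
// A floor or ceiling disk (gluDisk) is the same ring fanned in to a centre.
std::vector<Vec3> cylinderRing(float radius, float y, int slices) {
    const float kPi = 3.14159265358979f;
    std::vector<Vec3> ring;
    for (int i = 0; i < slices; ++i) {
        float theta = 2.0f * kPi * i / slices;  // angle around the wall
        ring.push_back({radius * std::cos(theta), y, radius * std::sin(theta)});
    }
    return ring;
}
```

Two such rings – one at floor height, one at ceiling height – joined quad by quad give you the room’s outer wall.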

Here’s a few snapshots of how it looks so far:

This first view is of the side wall, with a seamless spacey texture (image) I found on the internet. The top part – the space part – is pretty amazing and gave me an idea which I will get into, and the floor is another image of alien flooring I pulled from the internet.


The cool thing is – I can walk through this scene, and when I pull back (using the down arrow key on my keyboard), I get a larger scene unveiled which shows off my room and the chandelier:


What you can’t see in these still images, you can in the video: the wall and the blueish ceiling are actually moving. A pretty neat effect, if I must say so myself.

If I push the down arrow more – since my walls don’t have collision testing yet – I can walk straight through the wall as if it weren’t there, in which case I get to see the world I am creating from the outside:

In this case, all that’s really visible from the outside is the large cylinder:


At this point, once I had a working base design and idea – though I am still dealing with one problem, which I will get to in a minute – I can then leverage ONE of the objects I created previously from the other tutorial: the GLSkin object.

With this, I created a few global C++ pointers – messy but serving the purpose right now as I am just testing and playing:


I then found a whole bunch of textures on the internet for floors, ceiling and walls, and created directories accordingly on my hard disk drive which contained each:


And from there, I leverage my own loader in GLSkin to load the textures for the primitives:


Like my function name? I’m tired of traditional naming convention bullshit, and having more fun and being more descriptive with my function naming.

So now I have five distinct objects in memory handling texture, so when it came time to apply it to the outside cylinder wall, I applied it as follows:


It’s rather important to leverage scaling features with texturing, but one thing I learned VERY quickly is that the scale ‘carries forward’ to other operations and textures unless you revert your changes to make it look like you had never been there.

So I have already gotten in the habit of indenting my Matrix translation mode switching for ease of review.

Another habit I have gotten into is descaling. That is, if I scale the texture to 12 times its original size, then when I am done I must scale it back down by a factor of 1/12.

Similarly, if my texture is rotating, then once I render my object, I reset the translational axis to how I found it.

Why do this? glTranslatef tends to remember your position – its translations accumulate on the current matrix rather than setting an absolute position – and even when you ‘reset’ to 0, 0, 0 for a texture, it’s almost as if it’s calling that point a ‘new norm point’ and not in fact doing a reset as I would have expected.

Since leveraging a philosophy of undoing the changes I made seems like a polite and predictable way to operate anyway, it’s almost not worth digging into trying to understand what’s going on with glTranslatef beyond what I have already observed.
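For what it’s worth, that ‘undo what you did’ bookkeeping is exactly what OpenGL’s own matrix stack automates: glPushMatrix() saves the current matrix and glPopMatrix() restores it, so no manual descaling or counter-rotation is needed. A tiny plain-C++ stand-in – a single scale factor in place of a full 4×4 matrix, purely to show the save/restore pattern – looks like this:

```cpp
#include <cassert>
#include <vector>

// Stand-in for OpenGL's matrix stack: glPushMatrix() saves the current
// matrix, glPopMatrix() restores it bit-for-bit, so transforms applied in
// between never leak into later drawing. A lone scale factor replaces the
// full matrix here, just to show the pattern.
struct MatrixStack {
    std::vector<float> saved;
    float current = 1.0f;                 // identity

    void push() { saved.push_back(current); }                 // ~ glPushMatrix()
    void scale(float s) { current *= s; }                     // ~ glScalef()
    void pop() { current = saved.back(); saved.pop_back(); }  // ~ glPopMatrix()
};
```

With the real API the sequence would be glMatrixMode(GL_TEXTURE); glPushMatrix(); …scale, rotate, draw…; glPopMatrix(); – and the restore is exact, with none of the rounding a manual 1/12 ‘descale’ can accumulate.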

In this case, I start by rotating the texture on its x axis – which is my left and right at this angle – and I increment that rotation by a constant value every time execution passes into this ‘draw’ function.

Here’s the code for the global constant and current positional information declaration:


And the code for the increment operation which occurs iteratively, every time the draw occurs:


This occurs RIGHT before I redraw the objects on the screen.

ALL of this allows the object to ‘animate’ by rotating on a constant basis, much like the earth would be rotating around the sun or the moon around the earth, on a calculated cycle based on constants I have declared.

Constants that would be relevant to a similar rotation of the moon in orbit around Earth would be something like PI, right?

Leveraging my quick loader for textures, I can now – quickly – change the name of my textures to load a brick wall texture and wooden floor, which looks like this:


Here, I have circled the files I specifically used different names on:


And the result from this minor modification should make itself readily apparent when I run the program:


Pretty cool, eh?

I can then walk around this scene, but in the process I expose a pretty glaring problem. Take a look at this image to see the issue:


It’s a beautiful wooden floor, right? But the texture is FAR from seamless. And what I am finding on the internet is a HUGE problem: quality seamless textures are all pay-for.

One company, Shutterstock, has a virtual monopoly on high quality seamless textures, and places a really annoying logo across all the images they make available, making them utterly unusable unless you pay, like this:


I am homeless. And being real, Google has some decent images, but why is it that Shutterstock has 99.9% of the high quality images, which I can’t seem to find anywhere else? It’s almost as if… they eliminated the public domain images to make their business model work?

In any case, being real: I am a homeless programmer who had a breakfast muffin for dinner yesterday, bought by someone who felt guilty for flipping me off in a conversation about reality. Today I had a bagel. So to be clear – I have no money. But I figured I would check Shutterstock’s pricing:


So for the low, low price of only $2,000 a year, I can have unlimited images.


Microsoft ain’t got nothing on this company.

In any case, if you can see the cracks in the grain of the wood texture I found, it’s a problem with placing one end of the image against the other – they simply don’t align, which creates massive cracks and inconsistency in the texture.

Finding ‘seamless’ textures on the internet is an exercise in frustration and hair pulling. I spent literally two days, off and on, dinking with textures trying to find the perfect seamless ones – and realized I may just have to create my own.

Which is what I did with the brick texture in the above image: I found an image of a brick interior I liked, then spent about an hour ’tiling’ it. What you see on the screen is the result of that work. Which looks like utter crap when you get close.

I can scale down the size of the texture with this code:


This gives a more realistic effect for the brick:


But now the bricks are like pygmy bricks against the wood flooring texture.

The net issue: textures. The choices I have are capitalistically constrained.

The choices available via open source and/or free suck: they are low resolution, they aren’t seamless, and they are generally lower quality. The high resolution pay-for textures are unaffordable – they cost literally thousands of dollars.

But this gave me an idea.

If this were real – if I were actually inside this flying vessel that could go through space and time – well, traditional space-faring vessels have limited views of the world around them.

And let’s face it: if you’re living in a house, wouldn’t it be cool to ‘paint your own walls’?

So when I get this thing finished, one ‘feature’ – based on my struggle with high quality textures – is to let the walls take on NOT just ANY texture I want, but also to have the option of making the walls turn completely transparent:

That is:

They completely disappear!

I can imagine it now.

I am orbiting a planet and I wake up to see this:


So a ‘feature’ of this vessel will be walls with dynamic texturing – which doesn’t have to look realistic, because it is, after all, a dynamically textured wall – or the ability to make the walls and ceiling disappear completely.

So whoever is creating the technology on this planet: I need a LARGE (let’s say 20 feet maximum in diameter) 360 degree high resolution wrap-around seamless digital screen that, when turned on, is completely solid in appearance, but when turned off, is completely transparent. A large diameter ROUND screen for the roof would be sweet too.

I can handle synchronization through software.

Capitalism, thank you for the artificial scarcity you introduced, which has produced the necessity for ideas about alternatives!


Anyways. Last night, on the way back to where I sleep, I got to thinking about the GL coordinate system.

It’s generic. The lower left corner of the screen I look at is -1,-1 in the x,y axes, and the upper right is 1,1 – in normalized device coordinates. (I still get those mixed up.)

But I have been having a problem with scaling objects and size, and then it hit me like a bolt of lightning.

There’s absolutely NOTHING saying I can’t work within the positive x,y,z space (0,0,0 and up only), where every positive integer (1, 2, …) is equivalent to something I can understand better – a foot (12 inches).

This way, I won’t have to deal with mapping this abstract notion of size and distance in OpenGL coordinates to the real life dimensions of the virtual objects I am drawing, which makes it a HELL of a lot easier to gauge my drawing when I can apply it to my real world.

Being clear about this though.

It’s May 1st. 2015.

I understand the potential implications of drawing the lines between an abstract system of measure in the OpenGL world and the literal coordinates of the world around me.

You could say.

I’m prepared for what could happen as a result of this.

That’s what I am doing today, translating the dimensions of the objects I am drawing to approximate sizes and scales of the real world time and space traveling machine I want to actually play in in real life.

That’s it for today!

Here’s a link to the video outlining my coding efforts, and what I have worked on with OpenGL to get to the point I am at.

The video also has the animations in real time for me.

Learning doesn’t always make sense when it’s self paced.


By universalbri Posted in Work

Creator’s Journal: Holodeck Management System Progress

As is typical with development, yesterday was a lesson in programming concepts.

It’s fine and dandy taking someone else’s example and using it as your own. But what you invariably learn is that you have your own way of doing things, which makes far more sense to you than the way someone else does them.

Nehe’s OpenGL tutorials are awesome.

But I tend to prefer ‘pushing aside’ the things I have accomplished – and keeping what I am working on directly in front of me.

If you’ve ever seen me in the office, I work the same way.

My desk is utter chaos. It’s because my desk is my file system. If I need something, I know right where it is; no matter what it is I need, I will be able to get it faster than you can blink.

It works for me.

And this ‘noise’ – surrounding what I do – which others may think is clutter – is the chaos I actually enjoy working in.

Starbucks is actually great for me for that. Sometimes the rhythmic music is annoying and gets to me, so I throw in something random on Pandora – right now it’s Aerosmith, yesterday it was Enya, the day before it was System of a Down, and the day before that it was Avicii, and so on. This is background noise; I literally tune it out to do my work.

So looking at Nehe’s OpenGL samples, which I went through a preliminary conversion process to make my own – I still can’t say I ‘understood’ what he did until I started having problems with it.

And when I did, I realized I gotta start almost from scratch to make it work for me.

This is the problem with the 3D engines out there. Whether it’s Blender, Torque, CryEngine, or Maya – all come with a mindset you’re expected to adapt to. Even Microsoft Windows has a mindset it narrows your focus on, which I don’t mind as much – but for 3D graphics, I was finding the ability to ‘tweak’ the underlying mechanisms in these packages to work for me inaccessible.

And add on to that – the bloat – these programs are huge, take up gobs of memory, and are not kind to machines like this little XP Netbook I am working with.

Yesterday, I showed someone my graphics, and they actually commented:

“Wow. You’re doing all that on that little machine?”

You REALLY don’t need high end machines to do OpenGL programming, I am finding. Now I suppose I will test the limits when I bump up the number of objects I use. But that’s the beauty of doing this in C++: I don’t have someone else’s program making tons of assumptions about how I am going to use their package, where – when I don’t – the entire thing breaks down, or I am forced to take a path more in line with the way they developed things.

So part of yesterday’s frustration was – I started with one single object rotating in the middle of the screen.

I didn’t have to work with viewports and all the complexities of that in order to get it to function because that code was already done for me. So when I tried ‘shifting’ the object’s position off the center point, I discovered the complexities involved with viewport programming.

To try to understand viewports, I then tried physically moving the object with the mouse.

That’s when I ran into perspective issues.

Again, all code that had been done for me, that I hadn’t really messed with because the cool objects were working and texturing in the middle of the screen like they should have.

So this morning, no sooner had I arrived than a man named Sid – who did much of the 3D for James Cameron’s movie Avatar – asked how things are going on my project. I explained my problem.

He explained how they did it in Hollywood, and how they are constantly concerned about scene depth because of the camera angles and lenses being used – and this is with traditional 2D imagery.

For instance, I wasn’t aware that they shoot ‘wide angle’ scenes – streets and the like – with 35 degree lenses to get more of the scene. But for close-ups the angle changes, so you’re using 20 degree angles and less in order to get close to your subject.

That was the heart of my problem. I was using a ‘wide angle’ lens for my 3d viewport.

That’s when he struck on the idea for me:

Build the scene first.

I mean. It should have been a big huge duh.

That way, I can learn about the camera angles, rotational and positional information – and lenses to get the proper information about the scene in place before I really dive into pushing things away as ‘done’.

So after that, I remembered the “it’s bigger on the inside” – and started working on taking this rather amazing artistic 2D version of the TARDIS and converting it into a 3D image:



Anyways, today I spent the time creating code for the primary window, and step by step went through the creation of a window and perspective. More on this tomorrow, as I just got ‘debug text’ working on the main screen.

This part of it strangely feels like work. But once I get this grunt work behind me, I look forward to what’s next.

Time to get out of here.


By universalbri Posted in Work

Creator’s Journal: Holodeck Management System Progress

Days like today I just want to scream.

Made absolutely zero progress today.


What was the problem?

Oh, translating mouse coordinates in two dimensional space to depth-mapped coordinates in three dimensional space.

That is: when I click on a point on the screen at x,y coordinates, I simply want the ONLY object I am working with to move to that location.

Simple shimple, right?

fuck me.

I did learn a couple things:

I am probably using the wrong projection model – and quite likely need to switch to orthographic.

Here’s the difference:

This is orthographic:


And this is perspective based:


Being sincere: I have no fucking clue what the problem is. I’m guessing the projection is where my problem with mapping the coordinates lies.

But that’s just a guess.

In any case, all of yesterday’s todo list is still todo for tomorrow.

Made absolutely no progress.



Bigger on the inside. That’s the only thing that keeps going through my head about the differences between the two projection models. Bigger on the inside.

By universalbri Posted in Work