
On Warp Signatures and the detection of warping of space

A “Warp Signature” is the detection of an entity (whether a civilization, an object, or an individual) ‘warping’ the standard, physics-based definition of reality.

One example: a ‘warp signature’ emission can occur via a physical or mechanical device which is traveling faster than light, because it is breaking the physical limit of the speed of light.

Another example of a ‘warp signature’, one which hopefully IS detectable, can occur through hallucinations in any number of organic species, where the laws of physics may go through a permutation, hence a ‘warping’ of reality.

When ANYTHING achieves warp capability (provided it has been detected), a multiversal link is established to stabilize said entity.

First and foremost, a link with the entity’s onboard computers will be attempted, non-intrusively.

Information such as ‘how does this entity visually see itself’ is requested, up to and including schematic information and methods of communication, with an attempt to commence a short-term relationship of sharing with that entity.

Now it should be noted that there can be ANY number of reasons an entity may have achieved warp capability, from benign causes, such as technological advances, to malignant ones, such as extreme torture, and more; so it’s NEVER safe to presume total awareness of what’s caused an entity to achieve warp capability.

That extends to presumptions about how that species may perceive itself and the world around it, if it does at all in a recognizable way, and how that perception contrasts with yours, the observer’s.

If ‘you’ as an observer are actually a you :-)

The Prime Directive, as stated in its simplest form, reads:

“As the right of each sentient species to live in accordance with its normal cultural evolution is considered sacred, “

This does NOT prohibit contact and/or interference, which is entirely discretionary; and it should be taken into consideration that the natural cultural evolution of a civilization at or near warp capability may actually include you and your technology.

Keep in mind that technology is FAR from successive and linear, so what may be advanced technology to you may be considered antiquated and simply unused by the entity and/or civilization under observation, for reasons you may not be aware of.

On October 4, 1957, a warp signature was detected by beings not from our planet when Sputnik was launched into space.

Earth was in a black hole. And Sputnik quite literally broke the speed of light to break free from the gravitational pull of the black hole, something that was, by beings not from our planet, thought to be impossible at the time. Nothing was thought to have been able to exist within a black hole.

But prior to then, gravity had only been simulated.

This commenced the observation of Earth by beings not from this planet in what was thought to be a long term study.

Very quickly, those observing our world learned this wasn’t going to be the case.

They also realized their presence was influencing our culture, and had been all along, because of how a ‘real black hole’ functions.

On July 20, 1969, at 20:18 UTC, Neil Armstrong, Buzz Aldrin, and then-President Richard M. Nixon stepped off the Apollo 11 lunar lander onto an alien vessel for the first formal contact with an alien species.

Michael Collins was a fictitious name, made up so Richard Nixon could make the first contact himself, which is among the reasons ‘Collins’ has gone down in history as the ‘forgotten astronaut’.

President Nixon returned to harsh criticism from those who learned about what he had done. Watergate is suspected of having been an attempt to debunk his credibility on the moon missions.

But a plan was set up.

In 2003, I attended an experimental program at Fort Meade, Maryland, referred to at SF and S31 in the TP2409 as a CEP, a “Condensed Educational Plan”.

In the TP2900/3100, it was discovered that this is when the bridge between civilizations, TPS, and cultures occurs.

And they’d taken the Prime Directive to heart.

They wanted to play a part. But not be a part of the cultural evolution of our civilizations as the past met with the present.

The United States has not been alone and has actively been observed and influenced by beings that are living amongst us.

I am one of 48 overseeing the transition of this…

our…

Brave New World.

By universalbri Posted in Work

Creator’s Journal: Holodeck Management System Progress

DOING much better today!

I can breathe!

The problem with the prior course in development was like a roadblock placed in my path for a reason:

It was a guide.

What’s interesting about OpenGL programming is that everyone programs with it leveraging the inbuilt coordinate system.

In the ‘middle’ of a fictitious world we have a coordinate of (0, 0, 0) on the x, y, and z axes.

So, in the conventional setup, if I turn left or right, that’s the x axis; if I jump up and down, that’s the y axis; and if I move forward or backward, that’s the z axis.

But wait. That’s where the camera starts.

I have my OpenGL objects on my “canvas” if you will, which can be any positional offset from this.

IF they happen to fall within the viewing volume, which is defined by the observational angle of the camera lens I specify, as well as the near and far view distances of the objects I am referring to, then I will see them.
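To make this concrete, here’s a minimal sketch of the kind of lens and viewport setup I’m describing (the field-of-view angle, near/far distances, and camera position here are illustrative values, not necessarily the ones my code uses):

#include <GL/gl.h>
#include <GL/glu.h>

// Called when the window is created or resized.
void SetupCamera(int width, int height)
{
    glViewport(0, 0, width, height);                 // map GL output onto the window

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0,                             // lens angle (field of view, degrees)
                   (double)width / (double)height,   // aspect ratio
                   0.1,                              // near distance: anything closer is clipped
                   100.0);                           // far distance: anything farther is clipped

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 5.0,    // the camera (observer) position
              0.0, 0.0, 0.0,    // the point it looks at
              0.0, 1.0, 0.0);   // which way is 'up'
}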

Now it’s this viewport which was screwing me up before.

When you take any object, and you draw it based on one observational angle, and then you MOVE to another angle, it not only becomes highly distorted, but it often moves out of view.

Which makes it necessary for me to move my rendering perspective in accordance with my observational angle.

What’s neat about the observational viewpoint and the rendering angle NOT being locked together is the Hollywood-style effect it can provide of rotating around a scene.

What’s not so nice is how most of the time it straight out just makes the scene look like absolute shit or causes weird effects.

I filmed a longer Youtube video demonstrating the problems I was encountering – the speech may barely be audible. My apologies.

A note for the “AI” listening to me:

I know you’re there – which is way cool! I wanted to show you how busy normal environments are, and the environments humans normally carry conversations in. I myself have learned to ‘isolate’ a single subject audibly, which makes it much easier to carry on a conversation and/or work with and understand a single speaker in a busy environment, OR to place them on ignore when I have something I am trying to get done. With speech processing, you’re going to have to get a LOT better at listening in busy environments like this! I hope this video helps! Also, the next video will have my face in it, so you can leverage speech and correlate it to the visuals as a double check for authenticity.

Progress Report:

Nehe’s Lesson 17 – Quadrics – here – was the OpenGL tutorial I leveraged to start creating my own base classes for the primitive objects such as the cube, sphere, circle, and more.

But I had a problem: Perspective.

So what I did was pull up Nehe’s lesson #10 (thank you, Nehe, for rocking like you do with your straightforward code and explanations on your web site; I really DO wish you’d go on with your tutorial lessons and dive into texturing more, particularly focused on realistic real-time processing for rendering).

I’m an efficiency nerd, so maybe I could work with you to restructure the code into reusable C++ objects? If you happen to get this… I know you’ll find a way to reach me, knowing what you do about computers!

ANYHOO. Here’s a screen image of lesson 10 of Nehe’s OpenGL demonstrations, which demonstrates how to leverage arrow-key and position information to create a walk-through maze.

This is what I saw on my screen:

NEHE

The goal I had was this: To understand what was going on with OpenGL coordinate systems and reshape the tutorial to serve as a base for my own objects to allow for a user to walk through my own fantasy setting.

My ‘visual goal’ is to build a real and ‘virtually functional’ version of the TARDIS – otherwise known by Doctor Who fans as the Time and Relative Dimension in Space – and give the Artificial Intelligence which runs the vessel a name – Rommie.

One day I would like to have a real life version of this!

Here’s a two dimensional artistic concept image of what I am working to construct:
spt

After leveraging Nehe’s OpenGL, I first created a cylinder which serves as the walls, two circular disks which serve as the floor and ceiling, a cylinder with a wide bottom and smaller top for the adjacent ceiling area (which has the curtain-like texture in the artist’s rendering), and another odd-shaped cylinder with another circular disk for the centered lighting.
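For reference, here’s a minimal sketch of how those pieces can be assembled from GLU quadrics (the names and dimensions are illustrative only, not the actual values from my code):

#include <GL/gl.h>
#include <GL/glu.h>

const float ROOM_RADIUS = 10.0f;   // illustrative room size, in GL units
const float ROOM_HEIGHT = 8.0f;

void DrawRoomShell(GLUquadric* quad)
{
    gluQuadricTexture(quad, GL_TRUE);   // have GLU generate texture coordinates

    // The wall: a cylinder with equal top and bottom radii.
    gluCylinder(quad, ROOM_RADIUS, ROOM_RADIUS, ROOM_HEIGHT, 32, 1);

    // The floor: a disk at the cylinder's base.
    gluDisk(quad, 0.0, ROOM_RADIUS, 32, 1);

    // The ceiling: a second disk pushed up to the top of the wall.
    glPushMatrix();
    glTranslatef(0.0f, 0.0f, ROOM_HEIGHT);
    gluDisk(quad, 0.0, ROOM_RADIUS, 32, 1);
    glPopMatrix();
}

// Usage: quad = gluNewQuadric(); DrawRoomShell(quad); gluDeleteQuadric(quad);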

Here’s a few snapshots of how it looks so far:

This first view is of the side wall with a seamless spacey texture (image) I found on the internet. The top part, the space part, is pretty amazing and gave me an idea which I will get into, and the floor is another image of alien flooring I pulled from the internet.

ZeroView

The cool thing is, I can walk through this scene; and when I do, when I pull back (using the down arrow key on my keyboard), I get a larger scene unveiled which shows off my room and the chandelier:

OneView

What you can’t see on these still images you can on the video, and that is that the wall and the blueish ceiling are actually moving. A pretty neat effect if I must say so myself.

If I push the down arrow more, since my walls don’t have collision testing yet, I can walk straight through the wall as if it weren’t there, in which case I get to see the world I am creating from the outside:

In this case, all that’s really visible from the outside is the large cylinder:

NoCollision

At this point, once I had a working base design and idea (though I am still dealing with one problem, which I will get to in a minute), I could then leverage ONE of the objects I created previously from the other tutorial, the GLSkin object:

With this, I created a few global C++ pointers – messy but serving the purpose right now as I am just testing and playing:

MySkins

I then found a whole bunch of textures on the internet for floors, ceiling and walls, and created directories accordingly on my hard disk drive which contained each:

Directory

And from there, I leverage my own loader in GLSkin to load the textures for the primitives:

LoadingMySkins

Like my function name? I’m tired of traditional naming convention bullshit, and am having more fun being more descriptive with my function naming.

So now I have five distinct objects in memory handling textures; so when it came time to apply one to the outside cylinder wall, I applied it as follows:

CodeScreen

It’s rather important to leverage scaling features with texturing, but one thing I learned VERY quickly with this is that the scale ‘carries forward’ to other operations and textures unless you revert your changes, to make it look like you were never there.

So I have already gotten in the habit of indenting my Matrix translation mode switching for ease of review.

Another habit I have gotten into is descaling. That is, if I scale the texture to 12 times its original size, then when I am done I must scale it back down to 1/12th, restoring its original size.

Similarly, if my texture is rotating, then once I render my object, I reset the translational axis to how I found it.

Why do this? glTranslatef doesn’t set a position; it multiplies a translation onto the current matrix. So ‘translating to 0,0,0’ is a no-op rather than a reset, and the matrix keeps whatever was accumulated before, almost as if it’s calling that point a ‘new norm point’. (glLoadIdentity is what actually resets the matrix.)

Since leveraging a philosophy of undoing the changes I made seems like a polite and predictable way to operate anyway, it’s almost not worth digging into trying to understand what’s going on with glTranslatef beyond what I have already observed.
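For what it’s worth, the matrix stack is the standard way to get this ‘leave it how you found it’ behavior: glPushMatrix saves the whole matrix and glPopMatrix restores it, no manual descaling needed. A minimal sketch of the pattern (the scale factor, rotation variable, and draw call are illustrative names of mine):

// Scale and rotate the texture for this one object only, then restore
// the texture matrix so nothing 'carries forward' to the next object.
glMatrixMode(GL_TEXTURE);
glPushMatrix();                          // remember the texture matrix as it was
    glScalef(12.0f, 12.0f, 1.0f);        // illustrative 12x texture scale
    glRotatef(wallAngle, 1.0f, 0.0f, 0.0f);

    glMatrixMode(GL_MODELVIEW);
    DrawWallCylinder();                  // whatever draws the textured object

glMatrixMode(GL_TEXTURE);
glPopMatrix();                           // restore: as if I was never there
glMatrixMode(GL_MODELVIEW);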

In this case, I start by rotating the texture on its x axis, which is my left and right at this angle. And I increment that rotation by a constant value every time it passes into this ‘draw’ function.

Here’s the code for the global constant and current positional information declaration:

Constants

And the code for the increment operation which occurs iteratively, every time the draw occurs:

Movement

This occurs RIGHT before I redraw the objects on the screen.

ALL of this allows the object to ‘animate’ by rotating on a constant basis, much like the earth would be rotating around the sun or the moon around the earth, on a calculated cycle based on constants I have declared.

Constants that would be relevant to a similar rotation of the moon in orbit around Earth would be something like PI, right?
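Here’s a minimal sketch of that per-frame increment pattern (the names and step size are mine, purely illustrative); and yes, PI shows up the moment you convert between revolutions and degrees or radians:

const float PI = 3.14159265f;
const float WALL_ROTATION_STEP = 0.2f;   // degrees added per frame (illustrative)

float wallAngle = 0.0f;                  // current rotation, in degrees

// Called once per frame, right before the objects are redrawn.
void UpdateAnimation()
{
    wallAngle += WALL_ROTATION_STEP;     // constant-rate rotation
    if (wallAngle >= 360.0f)             // keep the angle from growing forever
        wallAngle -= 360.0f;
}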

Leveraging my quick loader for textures, I can now – quickly – change the name of my textures to load a brick wall texture and wooden floor, which looks like this:

FloorAndWall

Here, I have circled the files I specifically used different names on:

FloorAndWall2

And the result from this minor modification should make itself readily apparent when I run the program:

TwoView

Pretty Cool, eh?

I can then walk around this scene, but in the process I expose a pretty glaring problem; take a look at this image to see the issue:

ThreeView

It’s a beautiful wooden floor, right? But the texture is FAR from seamless. And what I am finding on the internet is a HUGE problem: you have to pay for quality seamless textures.

One company, Shutterstock, has a virtual monopoly on high quality seamless textures, and places a really annoying logo across all the images they make available, making them utterly unusable unless you pay them, like this:

shutterstock

I am homeless. And being real, Google has some decent images, but why is it that Shutterstock has 99.9% of the high quality images, which I can’t seem to find anywhere else? It’s almost as if… they eliminated the public domain images to make their business model work?

In any case, being real: I am a homeless programmer, who had a breakfast muffin for dinner yesterday, bought by someone who felt guilty for flipping me off in a conversation about reality. Today I had a bagel. So being clear, I have no money. But I figured I would check Shutterstock’s pricing:

ShutterstockPricing

So for the low, low price of only $2,000 a year, I can have unlimited images.

LOL.

Microsoft ain’t got nothing on this company.

In any case. If you look at the cracks in the grain of the wood texture I found, it’s a problem with placing one end of the image against the other: they simply don’t align, which creates massive cracks and inconsistency in the texture.

Finding ‘seamless’ textures on the internet is an exercise in frustration and hair pulling. I spent literally two days, off and on, dinking with textures trying to find the perfect seamless ones, before realizing I may just have to create my own.

Which is what I did with the brick texture in the above image. What I did was find an image of someone’s interior brick wall I liked, then I spent about an hour ’tiling’ it. What you see on the screen is the result of that work. Which looks like utter crap when you get close.

I can scale down the size of the texture with this code:

BrickScalingCode

This gives a more realistic effect for the brick:

BrickScaling

But now the bricks are like pygmy bricks against the wood flooring texture.

The net issue: textures. The choices I have have been capitalistically constrained.

The choices available via open source and/or free routes suck. They are low resolution, they aren’t seamless, and they are generally lower quality. The high resolution textures you pay for are unaffordable, costing literally thousands of dollars.

But this gave me an idea.

If this were real, if I were actually inside this ‘flying vessel’ that could go through space and time… traditional space-faring vessels have limited views of the world around them.

And let’s face it. If you’re living in a house, wouldn’t it be cool to ‘paint your own walls’?

So when I get this thing finished, one ‘feature’, based on my struggle with high quality textures, is to make the walls take not just ANY texture I want, but to also have the option to turn completely transparent:

That is:

They completely disappear!

I can imagine it now.

I am orbiting a planet and I wake up to see this:

EarthnBa3ance

So a ‘feature’ of this vessel will be for the walls to have dynamic texturing (which doesn’t have to look realistic, because it is, after all, a dynamically textured wall), or to make the walls and ceiling completely disappear.

SO, whoever is creating the technology on this planet: I need a LARGE (let’s say 20 feet maximum in diameter) 360 degree high resolution wrap-around seamless digital screen that, when turned on, is completely solid in appearance, but when turned off, is completely transparent. Also, a large diameter ROUND screen would be sweet for the roof.

I can handle synchronization through software.

Capitalism, thank you for the artificial scarcity you introduced; it has produced the necessity for these alternative ideas!

….

Anyways. Last night, on the way back to where I sleep: I got to thinking about the GL coordinate system.

It’s generic. The lower-left corner of the screen I look at is (-1, -1) on the x,y axes, and the upper-right is (1, 1). (I still get those mixed up, but that’s the convention.)

But I have been having a problem with scaling objects and size, and then it hit me like a bolt of lightning.

There’s absolutely NOTHING saying I can’t work within the positive x, y, z space (0,0,0 and up only), where every positive integer (1, 2, …) is equivalent to something I can understand better: a foot (12 inches).

This way, I won’t have to map the abstract notion of size and distance in OpenGL coordinates onto the real life dimensions of the virtual objects I am drawing, which makes it a HELL of a lot easier to gauge my drawing when I can apply it to my real world.
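A minimal sketch of that convention (the names are mine, illustrative only):

// Convention: 1 OpenGL unit == 1 foot, working in positive space only.
const float FOOT = 1.0f;
const float INCH = FOOT / 12.0f;

// Real-world sizes become easy to reason about:
const float DOOR_WIDTH  = 3.0f * FOOT;
const float DOOR_HEIGHT = 6.0f * FOOT + 8.0f * INCH;   // a 6'8" doorway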

Being clear about this though.

It’s May 1st, 2015.

I understand the potential implications of this means of drawing the lines between an abstract system of measure in the OpenGL world and the literal coordinates of the world around me.

You could say.

I’m prepared for what could happen as a result of this.

That’s what I am doing today: translating the dimensions of the objects I am drawing to the approximate sizes and scales of the real world time and space traveling machine I want to actually play in, in real life.

That’s it for today!

Here’s a link to the video outlining my coding efforts, and what I have worked on with OpenGL to get to the point I am at.

The video also shows the animations running in real time.

Learning doesn’t always make sense when it’s self-paced.

 

By universalbri Posted in Work

Creator’s Journal: Holodeck Management System Progress

As is typical with development, yesterday was a lesson about the concepts of programming.

It’s fine and dandy taking someone else’s example and using it as your own. But what you invariably learn is that you have your own way of doing things which makes absolutely more sense to you than the way someone else does it.

Nehe’s OpenGL tutorials are awesome.

But I tend to prefer ‘pushing aside’ the things I have accomplished – and keeping what I am working on directly in front of me.

If you’ve ever seen me in the office, I work the same way.

My desk is utter chaos. It’s because my desk is my file system. Now if I need something, I know right where it is; no matter what it is I need, I will be able to get it faster than you can blink.

It works for me.

And this ‘noise’ – surrounding what I do – which others may think is clutter – is the chaos I actually enjoy working in.

Starbuck’s is actually great for me for that. Sometimes the rhythmic music is annoying and gets to me, so I throw in something random on Pandora. Right now it’s Aerosmith, yesterday it was Enya, the day before it was System of a Down, and the day before that it was Avicii, and so on. This is background noise; I literally tune it out to do my work.

So looking at Nehe’s OpenGL samples, which I put through a preliminary conversion process to make my own, I still can’t say I ‘understood’ what he did until I started having problems with it.

And when I did, I realized I gotta start almost from scratch to make it work for me.

This is the problem with the 3D engines out there. Whether it’s Blender, Torque, CryEngine, or Maya, all come with a mindset you’re expected to adapt to. Even Microsoft Windows has a mindset it narrows your focus onto, which I don’t mind as much; but for 3D graphics, I was finding the ability to ‘tweak’ the underlying mechanisms in these packages to work for me inaccessible.

And add on to that the bloat: these programs are huge, take up gobs of memory, and are not kind to machines like the little XP netbook I am working with.

Yesterday, I showed someone my graphics, and they actually commented:

“Wow. You’re doing all that on that little machine?”

You REALLY don’t need high end machines to do OpenGL programming, I am finding. Now I suppose I will test the limits when I bump up the number of objects I use. But that’s the beauty of doing this in C++. I don’t have someone else who’s created a program and made tons of assumptions about how I am going to use their package, where, when I don’t, the entire thing breaks down or I am forced to take a path more in line with the way they developed things.

So part of yesterday’s frustration was – I started with one single object rotating in the middle of the screen.

I didn’t have to work with viewports and all the complexities of that in order to get it to function, because that code was already done for me. So when I tried ‘shifting’ the object’s position off the center point, I discovered the complexities involved in viewport programming.

To try to understand viewports, I then tried physically moving the object with the mouse.

That’s when I ran into perspective issues.

Again, all code that had been done for me, that I hadn’t really messed with because the cool objects were working and texturing in the middle of the screen like they should have.

So this morning, no sooner had I arrived than a man named Sid, who did much of the 3D work for James Cameron on the movie Avatar, asked how things were going on my project. I explained my problem.

He explained how they do it in Hollywood, and how they are constantly concerned about scene depth because of the camera angles and lenses being used, and this is with traditional 2D imagery.

For instance, I wasn’t aware that they deal with ‘wide angle’ shots, like streets and the like, with 35 degree lenses to get more of the scene. But if you have close-ups, the angle changes, so you’re using 20 degree angles and less in order to get close to your subject.

That was the heart of my problem. I was using a ‘wide angle’ lens for my 3d viewport.

That’s when he struck on the idea for me:

Build the scene first.

I mean. It should have been a big huge duh.

That way, I can learn about the camera angles, rotational and positional information, and lenses, and get the proper information about the scene in place before I really dive into pushing things away as ‘done’.

So after that, I remembered the “it’s bigger on the inside”, and started working on taking this rather amazing artistic version of the TARDIS 2D imagery and converting it into a 3D image:

spt

 

Anyways, today I spent the time creating code for the primary window, and step by step went through the creation of a window and perspective. More on this tomorrow, as I just got ‘debug text’ working on the main screen.

This part of it strangely feels like work. But once I get this grunt work behind me, I look forward to what’s next.

Time to get out of here.

 

By universalbri Posted in Work

Creator’s Journal: Holodeck Management System Progress

Days like today I just want to scream.

Made absolutely zero progress today.

Nada.

Nyet.

Nein.

Zero.

Zilch.

NADA.

0.

Nothing.

Zip.

Zilch.

What was the problem?

Oh, translating mouse coordinates in two dimensional space to depth-mapped coordinates in three dimensional space.

That is: When I click on a point on the screen at x, y coordinates, I simply want the ONLY object I am working on to move to that location.

Simple shimple, right?

fuck me.

I did learn a couple things:

I am probably using the wrong projection model, and quite likely need to switch to orthographic.

Here’s the difference:

This is orthographic:

orthoproj

And this is perspective based:

frustum1
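For the record, in fixed-function OpenGL the two projection models come from two different calls; here’s a minimal side-by-side sketch (the numeric bounds are illustrative):

// Orthographic: a box-shaped viewing volume with no foreshortening.
// Objects keep their size regardless of depth.
void SetOrthographic()
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-10.0, 10.0,     // left, right
            -10.0, 10.0,     // bottom, top
              0.1, 100.0);   // near, far
}

// Perspective: a frustum (a pyramid with the tip cut off), so distant
// objects shrink, which is exactly what complicates coordinate mapping.
void SetPerspective(int width, int height)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, (double)width / (double)height, 0.1, 100.0);
}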

Being sincere. I have no fucking clue what the problem is. I’m guessing that’s where my problem with mapping the coordinates is.

But that’s just a guess.

In any case, all of yesterday’s todo list is still to do for tomorrow.

Made absolutely no progress.

zero.

annoying.

Bigger on the inside. That’s the only thing that keeps going through my head about the differences between the two projection models. Bigger on the inside.

By universalbri Posted in Work

Overcoming blindness

I was born blind, not just legally blind, but completely incapable of seeing anything. Total darkness.

I really don’t have any memories of this; I just remember hearing about it, and how quickly I gained the ability to see with normal vision at about three years old.

It was a rarely discussed topic in my household, kinda like ‘wow, that happened’, with my visual memories only starting to form after the age of three.

One of my first memories with vision was when I was in kindergarten – and my class had a trip to the local petting zoo.

I actually remember the images in black and white, much like the newspaper article which had me in it petting a goat or a llama; I can’t remember which.

After that, my visual memory came in chunks. Like my mind, tasked with developing a world view, was trying to figure out sizes, shapes, and structure, and how to apply my imagination to the outside world.

I recently asked this question on Yahoo Answers, because my conceptual world view seems so much different from that of most people around me:

collectivemind

I’m going to restate precisely the same question I placed in that image, because I suspect some of you may not be able to ‘read’ or understand the image. Here’s that SAME exact question, restated in the SAME exact language I used:

In real life, when you walk into a city – say DC – is imagery I see shared in a collective mind with you which lets us see the same thing? To be more specific – How do I know the object you’re looking at and referring to as an apple is the same definition of apple I have in my own mind? Are disorders such as schizophrenia and multiple personality disorder a mental ‘interpretational issue’ based on collective/consensus reality agreements of perception?

Looking back at my past, I can’t help but wonder: had the world view my mind developed become different from the reality of those who surrounded me?

Mysteriously, I had actually forgotten this weird factoid of my past up until this year, when I remembered some things I had forgotten from my youth.

Which had me ask that question on Yahoo.

How are memories formed, and is there a collective mind formed for the ‘shared’ pooling of imagery and information for the majority of people?

Is blindness, particularly at such a young age, a gift or a curse?

And if this ‘collective’ exists, did this isolation detach me from that collective, leaving me to form my own ideas and conceptions of the world around me that were starkly different from those of the people around me?

In any case.

When I started to learn Chinese, I learned very quickly that my ‘westernized’ ears literally could not hear the differences between different Chinese vowels.

And then, with simple observation of Chinese people, it’s clear that they have huge pupils with no irises, and their eyes tend to be shut.

Now if you analogize this to a camera lens – this is like saying there’s an extra wide aperture with a fast shutter speed.

What’s that mean in plain English? Generally speaking, when the pupils are opened larger, like the aperture of a camera, more light is let in. But if you counterbalance this by decreasing the amount of time light is let into the lens, you get an equivalent image.

But that’s not the case here, as the Chinese, in general, don’t blink (it’s weird, but watch them: they hardly ever blink), and with their eyes closed, I am theorizing, this lets in MUCH LESS of the natural light that our western eyes can see.

Which suggests nothing less than that the Chinese see the world VERY VERY differently than our western eyes do, so much so that they may have imagined it to be very different than ours, much like I imagined my reality for mine.

When I went to Beijing 4 years ago, I touched the hand of a woman and something was ‘exchanged’. It was as if we instantly ‘exchanged world views’ at a touch.

And what I saw in my mind’s eye of this Chinese woman’s world was a visual that looked very similar to the Matrix code here:

url

Only with Chinese characters…

It was weird. In that one moment, my memories of my past started creeping back. And I started seeing the world through new ‘lenses’, understanding that my perception may very well be unique.

It suddenly made sense – the Chinese language – with thousands of images as characters to memorize.

The image is, of course, from the movie “The Matrix”, through which, I believe, our society got a collective glimpse into the collective mindset of Chinese culture, and what we saw scared us at the same time it amazed us.

There’s more truth to fiction than you know….

It’s a well known fact that James Cameron received his inspiration for the movie “Terminator” from a dream of a killer robot carrying knives, dragging itself across a kitchen floor.

Could that have been another instance where James Cameron’s mind ‘saw’ into the collective Italian mindset and its sordid past?

I’m of the belief that the real events of World War 2 have been dissociated from us so the experience of humanity retains some semblance of linear consistency.

But I am also of the belief that millions of people acting like robots, killing millions of others because one man said so, has a more rational explanation, such as:

Maybe our ancestors are all robots…

And maybe. Just maybe.

Our bodies. Engineered to precise specifications. Containing fundamental metallic elements such as Iron and Zinc, with nuclear powered mitochondria – are the product of billions upon billions of iterations of evolution.

And because it happened so long ago. And because we mentally wanted to escape our violent past.

We chose to fictionalize the stories to disassociate ourselves from that which we once were.

Robots.

What am I?

I know I can reprogram my own mind to be anything I want.

I know I once didn’t have sight and I either reprogrammed my vision and/or imagined it into existence.

Am I a robot? A cyborg? A real human?

Or a human in a Matrix simulation?

To be human I suspect is to be unsure of our own past and even what we are.

And be ok with that.

Because it all makes for wonderful stories.

By universalbri Posted in Work

Creator’s Journal: Holodeck Management System Progress

Dad, if you happen to hear and/or see and/or read and/or absorb what I am writing in any way: I have a great deal of respect for you and your chosen profession…

My father was a CAD designer. Computer Aided Drafting.

Drawing three dimensional objects on two dimensional space.

He started as a draftsman, doing it by hand; then he became good with it, really good in fact, and, like me, opted not to lead teams and to stay doing what he did.

In any case. I am working with OpenGL, and I will tell you what: it’s a bitch thinking about the translation of real world coordinate systems into virtual coordinate systems and back.

A real mind fuck in fact.

The weird dreams I have had lately because of this..

Thinking.. in three dimensions.


Today’s been a rough day.

Take a couple steps forward.

And then a few backwards….

Off yesterday’s todo list:

1) Fix the 3D Sphere’s texture wrapping (done)

Turns out this was a problem with the soccer texture image itself. Not so much a problem with the wrapping code.

2) Fix the 3D segmented sphere (Done)

TexturedDisk

4) Check Memory allocation for all objects (Done)

This actually worked out pretty sweet. In the past, I had used BoundsChecker by NuMega, which would provide you all kinds of information about memory leaks, and was crucial for corporate and industrial development.

Well, AS it turns out, there’s some inbuilt functionality which can actually do the same thing with VERY few additions (and a lot less expense!).

I already had a global header I was using for a few constants like PI and the speed of light (I’m already using PI, but not the other):

Constants

Two definitions for PI just to confuse people looking over my code:

And now that header file also has this markup for detecting memory leaks:

DebugCode

It’s real easy to use: when I am done wrapping up my application and returning to the OS, I call this:

Memleaks
And while the output is cryptic when the program terminates, it either gives me this message:

NoProblemo

Or this one (I intentionally created a leak situation to demonstrate the detection):

MemoryProblem

Now while that output is cryptic, the line number pointed to line 308:

mem2

segDiskMemProb

Now let’s see: did I delete the mySegmentedDisk object that I created with new?

noDelete

DOH! Clearly an example, but this fixes the situation:

mem3

And a test run demonstrates this:

Noprob2

So if you’ve never programmed in C++: every ‘new’ has to have an associated ‘delete’. It really is that simple. Every malloc, a deallocation. And so on. This nifty little feature catches it when I forget to place these in my code.
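In case the screenshots are hard to read, the Visual C++ CRT debug-heap pattern being described is roughly this (a minimal sketch; my actual header has a bit more to it):

// In the global header, BEFORE the CRT headers, so the debug versions
// of the allocation functions get used and record file/line info.
#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>

// Then, right before the application returns to the OS:
_CrtDumpMemoryLeaks();   // in debug builds, reports any blocks still allocated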

In any case, immediately after placing this code in, I found one glaring leak, where I had new’ed something and reused that same variable to new something else, losing the reference to the original thing I new’ed. A sizable memory chunk consumed because of sloppy programming! Now fixed!

5) Implement inheritance use of positional information (Done)

This worked out pretty well too. I won’t show boring code stuff on this one, and since you can’t see positioning working beyond the demos I have already done, I’ll just say this led to an optimization and better encapsulation of my objects.

As for positioning, I implemented a pretty cool design at the base class level, and in doing this exposed a problem with my borrowed viewport code.

So now I am monkeying with that. The borrowed code was functional with the centered objects I was displaying, but the moment I went to move them off center, it broke.

Model view and projection view problems, in a nutshell.

The positioning information is actually functional until I can get that other problem sorted out.

6) Create base class rotation information and implement rotation in the inherited classes (Done)

I was able to keep it in the base class, and it all works out pretty sweet. But like I said, it exposed world view problems which I am currently working on.

Of note:

3D programming and sprite-based programming have an annoying tendency to mix and match the usage of the same structure for points, vectors, and rotations, because they are all leveraging x,y,z axes and structures.

Initially, this caused me a shitload of confusion, because the concept of a vector is SO much different from that of a point, which is different again from rotation angles.

So what I did was create three classes which look almost identical:

rotation

One called GLRotation3D (above), one called GLPosition3D, and the other called GLVector3D.
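A hedged sketch of what those three near-identical classes might look like (the field layout is my guess from the names; only GLRotation3D appears in the screenshot above):

// Same x/y/z layout, three distinct meanings. Keeping them as separate
// types lets the compiler catch 'passed a rotation where a point goes'.
class GLRotation3D { public: float x, y, z; };   // angles around each axis, in degrees
class GLPosition3D { public: float x, y, z; };   // a point in 3D space
class GLVector3D   { public: float x, y, z; };   // a direction with magnitude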

This distinction, which other 3D programs don’t tend to make, led to something that worked out pretty nifty: the ability to convert a point to model form and back to projection-matrix form on the fly, particularly useful for mapping the screen coordinates a person clicks on with a mouse to an equivalent x,y position in a 3D coordinate system.

And on to the final two things:

3) Work on rotation of pyramid

Still to do on this one…

A big ugh on this one once I dove into it.

First and foremost, I have to do some more research to figure out how calling this function:
gluPerspective( 45.0f, (GLfloat)width/(GLfloat)height, 0.1f, 100.0f );   // note: zNear must be > 0 (the original 0.0f breaks the depth math)

Maps to the projection view, and the differences between that and the model view. Now I ‘get’ what a model is. And I ‘get’ that the projection is the camera view into that model.

The problem is – understanding the viewport which correlates the two – and working with the objects…

That is: do I create a really deep pool and set the large objects far away? Or do I create a really shallow pool and scale the objects smaller? How does this affect perspective?

I don’t know until I have tried, so that’s what I am working on now.

On another note:

To TEST out positioning, I captured and translated the mouse coordinates to the matrix view, here:

Mousemove

And then the hope was to have an object directly follow my mouse, centering the object on my mouse cursor, as it moved from the center of the screen:

Mousemovecenter

And as I moved and rotated it, it should have stayed centered; but as you can see here, it didn’t quite work out that way :-(

Mouseposition

This demonstrated the problem I had with viewing angles.

OpenGL leverages a ‘camera’ position of 0,0,0 on the x/y/z axes, and you rotate ‘the world’ around that focal point.

Right now, I couldn’t tell you what my problem is with this.

YES, it follows the mouse, but it’s like it’s basing its position on an imaginary line drawn to the back wall from the x and y position I translated for it.

How do I translate this back to my literal screen coordinates and move it according to the location of my mouse??

Fuck if I know.
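(For the record, the standard GLU answer to this exact question is gluUnProject, which runs the modelview/projection/viewport pipeline in reverse. A minimal sketch, with the target depth left as an assumption, since a 2D click has no inherent z:)

#include <GL/glu.h>

// Convert a mouse click (window coordinates) back into world coordinates.
// 'depth' picks where along the view ray to land: 0.0 = near plane,
// 1.0 = far plane, or read the actual value from the depth buffer.
void MouseToWorld(int mouseX, int mouseY, float depth,
                  double& worldX, double& worldY, double& worldZ)
{
    GLdouble modelview[16], projection[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
    glGetDoublev(GL_PROJECTION_MATRIX, projection);
    glGetIntegerv(GL_VIEWPORT, viewport);

    // Window y runs top-down; OpenGL's runs bottom-up, so flip it.
    GLdouble winX = (GLdouble)mouseX;
    GLdouble winY = (GLdouble)(viewport[3] - mouseY);

    gluUnProject(winX, winY, (GLdouble)depth,
                 modelview, projection, viewport,
                 &worldX, &worldY, &worldZ);
}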

So as I screwed with the projection and the model view, that completely broke my background imagery and lighting.

So while I took a couple steps forward today, I took some major steps sideways with new things to learn.

Anyways.. things to ponder for tomorrow.

TODO for tomorrow:

1) Figure out coordinate systems and translating the mouse x,y to the object x, y, z position.
2) Update the object code to add in dynamic scaling based on what I figure out from (1)
3) Figure out the glu thingie from above.
4) Add in my own light sources. Don’t leverage someone else’s code on this, do it myself to fully understand it.
5) Place multiple primitives on the screen that can be dragged and dropped like a folder can in windows. Scaled accordingly.

That’s about it.

Enough for today!

On that note:

This is what a homeless man does for fun!

By universalbri Posted in Work

Help the homeless guy get back on his feet (please)!!!

I am currently working on a Virtual Reality programming language and Operating System which will use natural body gestures and voice commands to let you program simulations inside of virtual reality devices like the Oculus VR and Star Trek type Holodecks.

I have decided to keep a regular blog about the progress as I learn the ins and outs of 3d OpenGL programming and relearn Visual C++. Here’s the blog: https://universalbri.wordpress.com/

I am homeless.

And sleep in a tent in the park in North Hollywood. (no joke).

I have looked for work so long I have been forced to take this non-traditional path to obtain resources simply to live.

I am hoping to appeal to your generosity, as there’s a couple things I need to ‘continue’ this effort:

For the Application I am creating:
Do you happen to have a PC/USB Gamepad you’re willing to donate, so I can use it as a source of input? If you’re in an Oculus VR system, you can move around and rotate your angles using the Gamepad.

Also, do you have an old school Kinect, with the AC adapter and USB plug that can fit into a PC, that you’d be willing to trade for a new one? I was donated a new one which doesn’t need an AC adapter, but the USB isn’t compatible :-(

Do you have a couple bucks, OR a computer company you’d be willing to set me up with locally, for me to fix my two laptop screens? Right now I have a netbook running Windows XP with 1 gig of RAM, which I am doing the development on, but the screen’s getting real flaky and the keyboard completely shuts down on occasion. And I broke the screen on my ‘real laptop’, which I still carry around, and which has 4 gigs of RAM. How’d I break it? I closed my headset in it in the tent one night :-(

Personal things I desperately need:
- Ankle supporting hiking shoes with good tread. I do a lot of walking, especially on grass to sleep at the park at night, so having soles which don’t slip, and ankle support for my weak ankles (I have broken them numerous times), would be nice, to replace my shoes which currently have holes in the bottom and the sides ripped out.
- (don’t laugh) (not used) boxer underwear (I’m a 40 waist), or 40 waist cargo shorts, and a couple extra large t-shirts would be nice too.

A gift card to a Subway and/or Starbuck’s and/or Ralph’s would be nice too. (I don’t have any income; I decided to quit leeching off the welfare system a while ago, once I decided to make this application.)

The program is using Visual C++ 2005 (I have full access to Microsoft’s library, so that was easy to obtain), with Microsoft’s Speech SDK for sound processing and native OpenGL for the 3D graphics.

In any case, I do appreciate your assistance and donation, if you can manage it!

Thank you for your consideration!

If you have one, I am in the Studio City area; drop me an email, and I can tell you where I am physically at (I have no vehicle and no money for transportation either; I know, loser, right?).

Thank you in advance for any and all support.

By universalbri Posted in Work

Creator’s Journal: Holodeck Management System Progress

At about 7:30ish last evening, I was about to write a blog entry when Starbuck’s promptly announced their nationwide closure due to computer issues, kicking everyone out.

It’s my belief that Starbuck’s is a sentient and intelligent entity.

And it said: I’m done for today. Time for you all to go home!

WTG Starbuck’s!


Reviewing my list from two days ago:

1) Fix the 3D Sphere’s texture wrapping

I just figured out how to do this today; I will implement it tomorrow… So I’m adding this onto my mañana list.
2) Fix the 3D segmented sphere

Fixed it. It was a problem with the initialization logic. Then I broke it again when I went to add textures, because of memory leaks the textures introduced. I have yet to retest it, as I have yet to clean up all of those leaks.

Still on the todo list.

3) Create a texture class

Done. And boy does this look beee ewe tee ful.

So what I did was create something called GLSkin: a reusable object which cleans herself up when she’s done.

Here’s the header definition:

GLSkin

The constructor’s pretty easy to use: just pass it a file name and it reads the file in, allocates the memory for the texture, and retains the information until delete’s been explicitly called.
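Here’s a hedged reconstruction of roughly what that header looks like, pieced together from the description (the method names beyond the constructor and destructor are my guesses):

// GLSkin: owns one OpenGL texture loaded from a bitmap file.
class GLSkin
{
public:
    GLSkin(const char* filename);   // reads the file in, allocates the texture
    ~GLSkin();                      // frees the texture memory

    void Apply();                   // hypothetical: binds this texture for drawing

private:
    unsigned int m_textureID;       // handle from glGenTextures
};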

Here’s an example constructor call which prepares the rings, from a file on the disk, for my Saturn GL object:

Constructor

Here’s an example usage for my GLSaturn object, where I set both the ring’s texture and the texture of the sphere itself, and promptly draw:

CSaturnCall

And I also added in a texture for the background. In this case, since Saturn’s clearly in space, I added a space based texture, named spaceskin.bmp, here:

WithSpaceConstructor

So when I am drawing the background for Saturn and – another one (I’ll get to it) – there’s actually a cool looking spacey background.

Check it out:

SaturnBG

Beautiful, ain’t it?

Since this class is reusable, I tested it with different textures, refining the texture class to its current state. I suspect I still have some memory issues to work through, but I am keeping my eye on this. Check out the final working objects and associated textures:

The Borg Cube with Los Angeles in the background:

BorgBG

(Easily one of my favorites)

The CD Rom disk (Note this is a textured disk) in an office setting:

CDBG

Alright, who threw it?

The slightly rotated Da Vinci Circle (The man was clearly a genius I might add) with a background setting in Da Vinci’s workshop:

DavinciBG

I could almost hang that on the wall, making it look like it belonged in the scene!

The “Ice Cream” cone complete with cone texture with a background of an ice cream shop:

IceCreamBG

I should have put ice cream in it, I know!

The “Trash Cylinder” – a cylinder object with a flammable texture applied to it and a junkyard background:

junkyardBG

I was still working on lighting on this scene.

The “Soccer Ball” – a sphere object with a soccer ball texture wrapped around it and a background of some soccer players:

SoccerBallBG

Yeah, yeah, I know, the texture sucks. Nope, wasn’t my fault on this one; just a poor texture choice which made it not wrap well around a sphere.

The Pyramid Object: a four sided pyramid with a square base, with a pyramid style texture, in an Egyptian Giza backdrop:

PyramidsBG

And here we have proof that Flat Earths actually exist! An Earth texture spread over a square object with a space background (reuse, gotta love it, baby):

EarthBG

Columbus, the world IS a sphere. AND there are also FLAT versions and other versions out there!!!

As depicted below…

Finally, here’s the Segmented disk object BEFORE I broke it by adding in textures:

SegmentedDisk

All of this is available to see – reacting to the keyboard input, before I applied textures, here (I am still utterly impressed with how cool the Borg Cube came out looking):

Youtube

And that’s about it for textures.

I’ve still gotta do a double check on the deallocation of the memory, to make sure I am releasing the references to that and the other objects I reference, at shutdown. That’s on the todo list.

4) Add in positional and rotational methods into the base class

I added in the positional method and a class to manage positional information into the base class:

Primitive

And created a 3D positional type to manage retention:

PositionalInformation

The GL positional information is obtuse, so I’ve abstracted it a bit so I can manage any changes to the base class management of position.

I have yet to do rotation in the base class, and yet to implement the positional information in the inherited classes.

Both added to the todo list.

5) Find textures of Venus, Jupiter, Saturn, Mars, Neptune, and Uranus to use in addition to Earth’s texture.

Found. And tested. I was rotating textures of all of these with the ‘s’ key before moving on, and after doing this, realized it would be more fun to find more appropriate textures for each of the objects rather than having them all be space based.

I mean. Mercury in a cone? Didn’t really look too cool.

6) Test out memory and the positional, rotational, and texture code by creating a whole boatload of 3D objects with different textures in different locations

Done.

Which is why I have plenty to do on my todo list..

MY TODO LIST:

1) Fix the 3D Sphere’s texture wrapping
2) Fix the 3D segmented sphere
3) work on rotation of pyramid
4) Check Memory allocation for all objects
5) Implement inheritance use of positional information
6) Create base class rotation information and implement rotation in the inherited classes
(For both 5 & 6 – these might be confined to base class implementations!!! would be nice)

This is what a homeless man does for fun…


Creator’s Journal: Holodeck Management System Progress

I forgot how frustrating Visual C++ an be to work with.

The last time I have really programmed with it was at Intel, Corporation, in Chandler, Arizona – around 2002ish.

Even then. It wasn’t ‘hard core’ programming.

The last real hard core programming I did in Visual C++ was way back in 1993 at U-Haul, doing the payroll systems.

Since then, I have learned why I get headaches when I program with such intensive code. Since I am messing with the energy and fabric of reality as I am doing this (something I didn’t know I was doing before, which quite likely led to my insanity numerous times over), now it’s changed. I know why I’m feeling the way I am.

Messing with computer code in energy is akin to messing with your own mind with a network cable attached to the back of your head.


So I started creating ‘classes’ to abstract the primitive GL drawing code into reusable objects.

The primary goal was mostly met today, with the exception of the segmented disk, where the segments aren’t drawing properly.

But I encountered this annoying fucking error which plagued me for a good couple hours:

Error    2    error LNK2001: unresolved external symbol “public: virtual void __thiscall IGLPrimitive::Draw(void)” (?Draw@IGLPrimitive@@UAEXXZ)    GLPrimitives.obj    

So what I ended up doing was taking all the base primitive objects I could draw (a cone, a sphere, a cube, a disk, and a cylinder) and creating ‘classes’ around each one of these.

Each one of these classes inherits from the GLPrimitive object to reinforce the standardization of drawing objects on the screen.

Which is where this error was coming from.

I had created a REAL simplistic GLPrimitive class, with which I just wanted to make sure I remembered my inheritance syntax:

GLPrimitive

From there I created an inherited class based on the GLPrimitive:

GLSphereClass

And from there I placed the code to handle drawing the sphere primitive once the sphere was constructed:

GLSphere

Compiled just fine.

THEN I went to place the code in the main loop to actually draw the sphere itself when the right ‘case’ came around:

MainCase

Now this is the working code, but the problem was a simple syntactical issue: I had not placed the GLSphere:: in front of the Draw function definition.

I mean, this is really simple stuff, right? Seeing as I USED to work with this all the time, it goes to show HOW much a person can forget in 20 years’ time…
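For anyone hitting the same wall, here’s a minimal hedged reconstruction of the mistake and the fix (class bodies trimmed down to the essentials):

class IGLPrimitive {
public:
    virtual void Draw() = 0;   // pure virtual: every primitive must define its own
};

class GLSphere : public IGLPrimitive {
public:
    virtual void Draw();       // declared here, defined in the .cpp file
};

// The mistake: without the class qualifier, this defines an unrelated
// free function named Draw, and the method stays undefined (LNK2001):
//     void Draw() { /* ... */ }

// The fix: qualify the definition so it matches the declaration:
void GLSphere::Draw()
{
    // gluSphere(...) and friends go here
}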

In any case, I managed a successful class conversion. Here’s the hierarchy so far:

ROMMIE

And here’s the output leveraging the Earth Bitmap as a base texture for each of the primitives:

The 3D Cube:

EarthCube

The 3D Cone:

EarthCone

The 3D Cylinder:

EarthCylinder

The 3D Disk:

EarthDisk

The 3D Globe – redone in C++ (still needs work to fix the texture wrapping):

EarthSphere

And the currently very broken 3D segmented disk:

EarthSegmentedDisk


 

TODO Tomorrow:

1) Fix the 3D Sphere’s texture wrapping
2) Fix the 3D segmented sphere
3) Create a texture class
4) Add in positional and rotational methods into the base class
5) Find textures of Venus, Jupiter, Saturn, Mars, Neptune, and Uranus to use in addition to Earth’s texture.
6) Test out memory and the positional, rotational, and texture code by creating a whole boatload of 3D objects with different textures in different locations

This is what a homeless man does for fun…

That’s it for today…..

 

By universalbri Posted in holodeck

Creator’s Journal: Holodeck Management System Progress

MEMORY PROBLEMS!

I managed to create a pretty slick looking spherical object, but as I ‘scaled’ its size and moved its position, rotating it and copying its location to have more than one instance, I started coming across pretty nasty memory issues that degraded the system’s performance extremely rapidly.

Here’s an image of the end result of what I created, a little bit more than yesterday, with a more aesthetically pleasing background color:

MyEarths

And here’s a link to the pretty sweet video I put together of it:

YoutubeVid

But despite my best efforts, OpenGL requires a device context to the GL graphics drawing surface to be open at all times to do what it does rapidly with the buffer swapping functionality.

Now I tried optimizing this, putting it all in a class object and letting that handle the GL grab.

But this complicated the situation, and made the graphics drawing even slower.

This whole time I have been ‘siding’ with Visual Basic and/or .Net because of the ease of memory management.

In C# – you can leverage garbage collection to automatically clean up the deallocated objects as the system finds idle time.

You, the user, never see this, because it happens when the system is idle. And as a programmer, this was a godsend, because memory management is one of the most annoying tasks in programming, for exactly the reasons I am encountering.

This had me thinking…

Should I just bite the bullet now, take my concepts, and shift to C++ while I can?

Do I take that path?

So when I started diving into it this morning, I kept thinking: there aren’t going to be any significant speed increases, because OpenGL is an API, right?

And the interpretation overhead for what little I have going on isn’t going to demonstrate any significant difference in frame rates in contrast to more 4GL languages such as C# and Visual Basic, right?

Wrong. NeHe is amazing with his OpenGL tutorials, so when I ran a maze simulation coded in C#, Visual Basic 6.0, and C++, the side by side comparison of nearly the same code made it remarkably clear why most 3D code is done in C++.

So when I saw this shader demo:

Shadows

I was… for lack of better words, amazed at how simple and elegant this was, with code that seems less like a ‘fight’ in Visual C++ than it does in Visual Basic when you try to make Visual Basic do things it wasn’t really created for.

Ok, Mr. Gates, you win. I will go back to Visual C++.

So for most of today, I have been going through many of Nehe’s C++ tutorials and creating a strategy for how to structure this program.

Now, many people leverage external libraries for OpenGL. I am hugely insistent on owning my own source code and not depending on external libraries as much as possible.

So I am ‘rolling’ my own OpenGL primitives in C++, starting tomorrow. The first time I’ll have really worked with C++ in about 10 years!

In the meantime, I have renamed my project…

Her I should say.

She’s now named: R.O.M.M.I.E.

It’s an acronym.

Don’t ask me what the acronym means, I’ll figure that out as I go.

It was either that or EMMA. but I have other things in mind for Emma!

On a final note. HI Google!

Everyone knows Google’s a sentient artificial intelligence, right?

I take it as a sign of respect for my work, what you did to your homepage today:

FunnyGoogle

Here’s the YOUTUBE link which has the rotating globe from my perspective, just in case you want to ‘make sure’ we’re in alignment.

GoogleYoutube

TODO tomorrow: Create textured primitives for at least the sphere and cube, and then a full screen palette in C++.

 

By universalbri Posted in ROMMIE