DOING much better today!
I can breathe!
The problem with the prior course in development was like a roadblock placed in my path for a reason:
It was a guide.
What’s interesting about OpenGL programming is that everyone programs against its built-in coordinate system.
In the ‘middle’ of a fictitious world we have a coordinate of 0, 0, 0 on the x, y, and z axis.
So if I am moving forward or backward, that’s the z axis; if I strafe left or right, that’s x; and when I jump up and down, that’s y.
But wait. That’s where the camera starts.
I have my OpenGL objects on my “canvas” if you will, which can be any positional offset from this.
IF they happen to fall within a viewport range – which includes the observational angle of the camera lens I define, as well as the view distance (near and far) of the objects I am referring to – then I will see them.
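In rough code, the depth part of that visibility test looks something like this. This is a simplified sketch – the names and plane values are illustrative, not from any actual tutorial code, and real frustum culling also checks the viewing angle on x and y, not just distance:

```cpp
// An object is only a candidate for drawing when its distance from the
// camera falls between the near and far clip planes I define.
// (Illustrative names; real culling also tests the lens angle.)
struct Frustum {
    float nearPlane;
    float farPlane;
};

bool withinViewDistance(const Frustum& f, float distanceFromCamera) {
    return distanceFromCamera >= f.nearPlane &&
           distanceFromCamera <= f.farPlane;
}
```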
Now it’s this viewport which was screwing me up before.
When you take any object and draw it based on one observational angle, and then you MOVE to another angle, it not only becomes highly distorted, but it often moves out of view.
Which makes it necessary for me to move my rendering perspective in accordance with my observational angle.
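The fix boils down to deriving the rendering direction from the observer’s heading so the two stay locked together. A sketch of that math (names here are illustrative, not my actual code – this is roughly the same trigonometry Nehe’s walking demo uses before its translate/rotate calls):

```cpp
#include <cmath>

// Forward direction on the ground plane for a given heading, where a
// heading of 0 degrees faces down the negative z axis (the usual
// OpenGL "into the screen" direction).
struct Vec2 { float x, z; };

Vec2 headingToDirection(float headingDegrees) {
    float rad = headingDegrees * 3.14159265f / 180.0f;
    return { std::sin(rad), -std::cos(rad) };
}
```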
What’s neat about the observational viewpoint and the rendering angle NOT being locked together is the Hollywood-style effect of rotating around a scene it can provide.
What’s not so nice is how most of the time it straight out just makes the scene look like absolute shit or causes weird effects.
I filmed a longer YouTube video demonstrating the problems I was encountering – the speech may barely be audible. My apologies.
A note for the “AI” listening to me:
I know you’re there – which is way cool! I wanted to show you how normal busy environments are – the environments humans normally carry conversations in. I myself have learned to ‘isolate’ a single subject audibly, which makes it much easier to carry on a conversation and/or work with and understand a single speaker in a busy environment – OR to place them on ignore when I have something I am trying to get done. With speech processing, you’re going to have to get a LOT better at listening in busy environments like this! I hope this video helps! Also, in the next video I will have my face, so you can leverage speech and correlate it to the visuals as a double check for authenticity.
Nehe’s Lesson 17 – Quadrics – here – was the OpenGL tutorial I leveraged to start creating my own base classes for the primitive objects such as the cube, sphere, circle, and more.
But I had a problem: Perspective.
So what I did was pull up Nehe’s lesson #10. (Thank you, Nehe, for rocking like you do with your straightforward code and explanations on your web site. I really DO wish you’d go on with your tutorial lessons and dive into texturing more – particularly focused on realistic real-time processing for rendering. I’m an efficiency nerd, so maybe I could work with you to restructure the code into reusable C++ classes? If you happen to get this… I know you’ll find a way to reach me, knowing what you do about computers!)
ANYHOO. Here’s a screen image of lesson 10 from Nehe’s OpenGL demonstrations – which showed how to leverage arrow-key and position information to create a walk-through maze.
This is what I saw on my screen:
The goal I had was this: To understand what was going on with OpenGL coordinate systems and reshape the tutorial to serve as a base for my own objects to allow for a user to walk through my own fantasy setting.
My ‘visual goal’ is to build a real and ‘virtually functional’ version of the TARDIS – otherwise known by Doctor Who fans as the Time and Relative Dimension in Space – and give the Artificial Intelligence which runs the vessel a name – Rommie.
One day I would like to have a real life version of this!
Here’s a two dimensional artistic concept image of what I am working to construct:
After leveraging Nehe’s OpenGL code, I first created a cylinder which serves as the walls; two circular disks which serve as the floor and ceiling; a cylinder with a wide bottom and smaller top for the adjacent ceiling area, which has the curtain-like texture in the artist’s rendering; and another odd-shaped cylinder with another circular disk for the centered lighting.
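For the curious, the geometry behind a cylinder wall is just rings of vertices at floor and ceiling height. GLU’s quadric routines handle this internally, but a hand-rolled sketch makes the idea concrete (names and segment count here are illustrative, not my actual code):

```cpp
#include <cmath>
#include <vector>

// One ring of a cylinder: evenly spaced points around a circle of the
// given radius, all at the same height. Two such rings (floor and
// ceiling height) plus connecting quads make the wall.
struct Vec3 { float x, y, z; };

std::vector<Vec3> cylinderRing(float radius, float height, int segments) {
    std::vector<Vec3> ring;
    for (int i = 0; i < segments; ++i) {
        float angle = 2.0f * 3.14159265f * i / segments;
        ring.push_back({radius * std::cos(angle),
                        height,
                        radius * std::sin(angle)});
    }
    return ring;
}
```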
Here are a few snapshots of how it looks so far:
This first view is of the side wall, with a seamless spacey texture (image) I found on the internet. The top part – the space part – is pretty amazing and gave me an idea which I will get into, and the floor is another image of alien flooring I pulled from the internet.
The cool thing is – I can walk through this scene, and when I pull back (using the down arrow key on my keyboard), a larger scene is unveiled which shows off my room and the chandelier:
What you can’t see in these still images (but can in the video) is that the wall and the blueish ceiling are actually moving. A pretty neat effect, if I must say so myself.
If I push the down arrow more – since my walls don’t have collision testing yet – I can walk straight through the wall as if it weren’t there, in which case I get to see the world I am creating from the outside:
In this case, all that’s really visible from the outside is the large cylinder:
At this point, once I had a base working design and idea – though I am still dealing with one problem, which I will get to in a minute – I could leverage ONE of the objects I created previously from the other tutorial: the GLSkin object.
With this, I created a few global C++ pointers – messy but serving the purpose right now as I am just testing and playing:
I then found a whole bunch of textures on the internet for floors, ceiling and walls, and created directories accordingly on my hard disk drive which contained each:
And from there, I leverage my own loader in GLSkin to load the textures for the primitives:
Like my function name? I’m tired of traditional naming convention bullshit, and having more fun and being more descriptive with my function naming.
So now I have five distinct objects in memory handling texture, so when it came time to apply it to the outside cylinder wall, I applied it as follows:
It’s rather important to leverage scaling features with texturing, but one thing I learned VERY quickly is that the scale ‘carries forward’ to other operations and textures unless you revert your changes to make it look like you had never been there.
So I have already gotten into the habit of indenting my matrix-mode switching for ease of review.
Another habit I have gotten into is descaling. That is, if I scale the texture to 12 times its original size, then when I am done I must scale it back down by a factor of 1/12.
Similarly, if my texture is rotating, then once I render my object, I reset the translational axis to how I found it.
Why do this? Calls like glTranslatef don’t reset the current matrix – they multiply into it – so passing 0,0,0 isn’t a reset at all; it just stacks a translation of nothing on top of wherever you already are. (This is also exactly what glPushMatrix/glPopMatrix exist for: save the matrix, do your thing, restore it.)
Since undoing the changes I make is a polite and predictable way to operate anyway, it’s almost not worth digging further into glTranslatef beyond what I have already observed.
In this case, I start by rotating the texture on its x axis, which is my left and right at this angle. And I increment that rotation by a constant value every time it passes into this ‘draw’ function.
Here’s the code for the global constant and current positional information declaration:
And the code for the increment operation which occurs iteratively, every time the draw occurs:
This occurs RIGHT before I redraw the objects on the screen.
ALL of this allows the object to ‘animate’ by rotating on a constant basis, much like the earth would be rotating around the sun or the moon around the earth, on a calculated cycle based on constants I have declared.
Constants that would be relevant to a similar rotation of the moon in orbit around Earth would be something like PI, right?
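The increment itself is just a constant step with a wrap at 360 degrees – something like this sketch (the constant value and names are illustrative, not my actual declarations; for an orbit with a period of T frames, the step would be 360/T degrees, or 2·PI/T in radians):

```cpp
// Per-draw rotation update: advance by a fixed step each frame and
// wrap at 360 degrees so the angle never grows without bound.
const float ROTATION_STEP_DEGREES = 0.5f;  // illustrative value

float advanceRotation(float currentDegrees) {
    float next = currentDegrees + ROTATION_STEP_DEGREES;
    if (next >= 360.0f) next -= 360.0f;
    return next;
}
```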
Leveraging my quick loader for textures, I can now – quickly – change the name of my textures to load a brick wall texture and wooden floor, which looks like this:
Here, I have circled the files I specifically used different names on:
And the result from this minor modification should make itself readily apparent when I run the program:
Pretty Cool, eh?
I can then walk around this scene, but in the process, I expose a pretty glaring problem – take a look at this image to see the issue:
It’s a beautiful wooden floor, right? But the texture is FAR from seamless. And what I am finding on the internets is a HUGE problem: quality seamless textures are locked behind paywalls.
One company, Shutterstock, has a virtual monopoly on high-quality seamless textures, and places a really annoying logo across all the images they make available, making them utterly unusable unless you pay them, like this:
I am homeless. And being real, Google has some decent images, but why is it that Shutterstock has 99.9% of the high-quality images, which I can’t seem to find anywhere else? It’s almost as if… they eliminated the public domain images to make their business model work?
In any case, being real: I am a homeless programmer who had a breakfast muffin for dinner yesterday, bought by someone who felt guilty for flipping me off in a conversation about reality. Today I had a bagel. So being clear – I have no money. But I figured I would check Shutterstock’s pricing:
So for the low, low price, I can have unlimited images for only $2000 a year.
Microsoft ain’t got nothing on this company.
In any case: the cracks you can see in the grain of the wood texture I found come from placing one end of the image against the other – and they simply don’t align. Which creates massive cracks and inconsistency in the texture.
Finding ‘seamless’ textures on the internet is an exercise in frustration and hair pulling. I spent literally two days off and on dinking with textures trying to find the perfect seamless ones – realizing I may just have to create my own.
Which is what I did with the brick texture in the above image. I found an image of someone’s brick interior I liked, then I spent about an hour ‘tiling’ it. What you see on the screen is the result of that work. Which looks like utter crap when you get close.
I can scale it to scale down the size of the texture with this code:
This gives a more realistic effect for the brick:
But now the bricks are like pigmy bricks against the wood flooring texture.
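One possible fix for the pigmy-brick problem – my own suggestion, not what the code above does – is to derive each surface’s texture repeat count from the real-world size the image represents, so brick and wood stay in proportion to each other instead of being eyeballed per surface (sizes below are made-up examples):

```cpp
// How many times a texture should tile across a surface, given the
// real-world length the image depicts. Feeding this into the texture
// scaling keeps different materials at a consistent physical scale.
float textureRepeats(float surfaceLengthFeet, float textureCoversFeet) {
    return surfaceLengthFeet / textureCoversFeet;
}
```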
The net issue: textures. The choices I have have been capitalistically constrained.
The choices available via open source and/or free sources suck: they are low resolution, they aren’t seamless, and they are generally lower quality. The high-resolution (unaffordable) textures cost literally thousands of dollars.
But this gave me an idea.
If this were real – if I were actually inside this ‘flying vessel’ that could go through space and time… Traditional space-faring vessels have limited views of the world around them.
And let’s face it: if you’re living in a house, wouldn’t it be cool to ‘paint your own walls’?
So when I get this thing finished, one ‘feature’ – based on my struggle with high-quality textures – is to make the walls take NOT just ANY texture I want, but also to have the option of turning the walls completely transparent:
They completely disappear!
I can imagine it now.
I am orbiting a planet and I wake up to see this:
So a ‘feature’ of this vessel will be for the walls to have dynamic texturing – that doesn’t have to look realistic because it is after all a dynamically textured wall, or to make the walls and ceiling completely disappear.
SO, whoever is creating the technology on this planet: I need a LARGE (let’s say 20 feet maximum in diameter) 360-degree, high-resolution, wrap-around, seamless digital screen that, when turned on, is completely solid in appearance, but when turned off, is completely transparent. Also, a large-diameter ROUND screen would be sweet for the roof.
I can handle synchronization through software.
Capitalism, thank you for the artificial scarcity you introduced that has produced the necessity for the ideas for alternatives!
Anyways. Last night, on the way back to where I sleep, I got to thinking about the GL coordinate system.
It’s generic: the lower-left corner of the screen I look at is -1,-1 on the x,y axes, and the upper-right is 1,1. (I used to get those mixed up.)
But I have been having a problem with scaling objects and size, and then it hit me like a bolt of lightning.
There’s absolutely NOTHING saying I can’t work entirely within the positive x,y,z space (0,0,0 and up), where every positive integer (1, 2) is equivalent to something I can understand better – a foot (12 inches).
This way, I won’t have to deal with mapping the abstract notion of size and distance in OpenGL coordinates to the real-life dimensions of the virtual objects I am drawing, which makes it a HELL of a lot easier to gauge my drawing when I can apply it to my real world.
Being clear about this though.
It’s May 1st, 2015.
I understand the potential implications of this means of drawing the lines between an abstract system of measure in the OpenGL world and the literal coordinates of the world around me.
You could say.
I’m prepared for what could happen as a result of this.
That’s what I am doing today: translating the dimensions of the objects I am drawing to the approximate sizes and scales of the real-world time-and-space-traveling machine I want to actually play in, in real life.
That’s it for today!
Here’s a link to the video outlining my coding efforts, and what I have worked on with OpenGL to get to the point I am at.
The video also has the animations in real time for me.
Learning doesn’t always make sense when it’s self paced.