
Polygonal Map Generation for Games

I wanted to generate interesting game maps that weren’t constrained to be realistic, and I wanted to try some techniques I hadn’t tried before. I usually make tile maps, but this time I used a different structure: what could I do with 1,000 polygons instead of 1,000,000 tiles? The distinct, player-recognizable areas might be useful for gameplay: locations of towns, places to quest, territory to conquer or settle, landmarks, pathfinding waypoints, difficulty zones, etc. I generated maps with polygons, then rasterized them into tile maps. Most procedural map generators, including some of my own previous projects, use noise functions (midpoint displacement, fractals, diamond-square, Perlin noise, etc.) to generate a height map. There were three main things I wanted for this project: good coastlines, mountains, and rivers. Every project will have its own gameplay constraints. First, try the demo! The first step is to generate some polygons.
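The article builds its map from a mesh of polygons rather than a tile grid. A common way to get evenly sized polygons is to scatter random seed points, form their Voronoi regions, and apply Lloyd relaxation (move each seed to its region's centroid, repeat). Below is a minimal Python sketch that approximates the Voronoi step with a crude nearest-seed assignment over a grid; it is an illustration of the idea, not the article's code:

```python
import random

def rasterized_voronoi(seeds, width, height):
    """Assign each grid cell to its nearest seed (a crude Voronoi diagram)."""
    cells = {}
    for x in range(width):
        for y in range(height):
            nearest = min(range(len(seeds)),
                          key=lambda i: (seeds[i][0] - x) ** 2 + (seeds[i][1] - y) ** 2)
            cells.setdefault(nearest, []).append((x, y))
    return cells

def lloyd_relax(seeds, width, height, iterations=2):
    """Move each seed to the centroid of its cell; this evens out polygon sizes."""
    for _ in range(iterations):
        cells = rasterized_voronoi(seeds, width, height)
        seeds = [(sum(p[0] for p in pts) / len(pts),
                  sum(p[1] for p in pts) / len(pts))
                 for pts in cells.values()]
    return seeds

random.seed(42)
seeds = [(random.uniform(0, 48), random.uniform(0, 48)) for _ in range(20)]
relaxed = lloyd_relax(seeds, 48, 48)
```

After a couple of relaxation passes, the cells become roughly uniform, which is what makes the resulting regions read as "towns" and "territories" rather than random slivers.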

Tutorial 1 - MachinationsWiki From MachinationsWiki (Note: it might take a while for all the dynamic content to load.) During his 1999 lecture at the Game Developers Conference, Marc LeBlanc introduced feedback loops to the game design world (LeBlanc, 1999). Since then, feedback loops have been discussed by a number of influential designers, including Salen & Zimmerman (2003), Adams & Rollings (2007), and Fullerton (2008). The first thing you need to know about feedback loops is that this is not feedback aimed at a player to inform him or her about the state of the game. What, you might ask, does this have to do with games? An example of a feedback loop can be found in the well-known game of Monopoly. You might notice the run button in the lower-left corner of the diagram. One of the interesting characteristics of a Machinations diagram is that it visualizes the feedback loop. What, then, is the effect of a feedback loop on a game?
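The Monopoly example is a positive feedback loop: owning property generates income, and income buys more property. A tiny Python sketch (not a Machinations diagram; the numbers are made up purely for illustration) shows how such a loop accelerates:

```python
def simulate_positive_feedback(rounds, income_per_property=1, property_cost=10):
    """Rich-get-richer loop: properties generate income, income buys more properties."""
    money, properties = 0, 1
    for _ in range(rounds):
        money += properties * income_per_property  # each property pays rent
        while money >= property_cost:              # reinvest income into new property
            money -= property_cost
            properties += 1
    return properties
```

The gap between leader and laggard widens over time: the second 15 rounds buy far more property than the first 15, which is exactly the destabilizing effect positive feedback has on a game.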

Keeping track of items in a pool. This is a fantastic question because it's something you have to do pretty much every single time you use a game engine for any reason. Indeed, "handling a pool" of gameObjects is so common I reckon one day Unity will add a little system for it. In all video games you have "pools" of objects: for example bullets, clouds, enemies, whatever it may be. You might have, for example, 40 bullets on standby off screen. When someone uses a machine gun, you use the bullets in order. Many developers or companies have their own little system they use to generalise this, or you might just have some typical code you modify each time for each little pool. Note, if you are not a native English speaker: in the code below, by "model" I mean model as in Claudia Schiffer, not model as in Maya. You'd have a gameObject named "enemies" which holds all the enemies. Now, under the "enemies" gameObject, you'd have at editing time just the one "model enemy", your one perfect enemy. So, enjoy.
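The answer above describes Unity gameObjects, but the pattern itself is engine-agnostic. A minimal Python sketch of the idea (preallocate the objects up front, hand them out on demand, return them when done; all names here are my own, not from the answer):

```python
class Pool:
    """A fixed pool of reusable objects, e.g. 40 bullets on standby off screen."""

    def __init__(self, factory, size):
        self.inactive = [factory() for _ in range(size)]  # preallocated, ready to use
        self.active = []

    def get(self):
        """Take an object from standby; returns None if the pool is exhausted."""
        if not self.inactive:
            return None
        obj = self.inactive.pop()
        self.active.append(obj)
        return obj

    def release(self, obj):
        """Return an object to standby instead of destroying it."""
        self.active.remove(obj)
        self.inactive.append(obj)
```

A real implementation must also decide what happens when the pool runs dry (grow it, recycle the oldest active object, or simply refuse, as here).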

Guerrilla Tool Development. I have a weak spot for cool game development tools. Not the IDE, or art or sound tools; I mean the level editors, AI construction tools, those that developers develop specifically for their games. Those that you know could help you multiply your content and craft your game just a little bit better. Unfortunately, if you work on a small team, developing sophisticated tools like that is pretty much out of the question. That does not mean you have to hardcode everything, though. Here I will give you some ideas for getting tools for your game on a tight budget. Know your content-creation tools inside out: before you even think about developing customised tools, it is extremely important to know your content-creation tools extremely well, even if you are not the content creator. As a programmer, you should focus on the following features. Automation: many art tools support some kind of batch processing. Data-driven design: this goes hand in hand with automation.
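The cheapest "tool" is often a batch script that runs an existing transform over every asset in a folder. A hypothetical Python sketch of that shape (the function names and `.txt` extension are my own illustration, not from the article):

```python
import os

def batch_process(src_dir, dst_dir, transform, extension=".txt"):
    """Apply `transform` to every matching asset file, writing results to dst_dir."""
    os.makedirs(dst_dir, exist_ok=True)
    processed = []
    for name in sorted(os.listdir(src_dir)):
        if not name.endswith(extension):
            continue  # skip assets this pipeline step doesn't handle
        with open(os.path.join(src_dir, name)) as f:
            data = f.read()
        with open(os.path.join(dst_dir, name), "w") as f:
            f.write(transform(data))
        processed.append(name)
    return processed
```

Swap `transform` for a call into your art tool's command-line exporter and you have a one-file asset pipeline, which is usually all a small team needs.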

Jayelinda Suridge's Blog - Modelling by numbers: Part One A. The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community. The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company. An introduction to procedural geometry: procedural geometry is geometry modelled in code. Instead of building 3D meshes by hand using art software such as Maya, 3DS Max or Blender, the mesh is built using programmed instructions. This can be done at runtime (the mesh does not exist until the end-user runs the program), at edit time (using a script or tool while the application is being developed), or inside a 3D art package (using a scripting language such as MEL or MaxScript). Benefits of generating meshes procedurally include: Variation: meshes can be built with random variations, meaning you can avoid repeating geometry. Scalability: meshes can be generated with more or less detail depending on the end-user’s machine or preferences. Here are some procedural examples from my own game projects:
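At its core, building a mesh in code means filling a vertex list and an index list that describes triangles. A minimal Python sketch of that idea, generating a flat grid of quads (this is a generic illustration, not the blog's Unity code):

```python
def build_grid_mesh(cols, rows, cell=1.0):
    """Build a flat grid of quads as vertex/index lists, two triangles per quad."""
    # One vertex per grid corner, laid out row by row on the XZ plane.
    vertices = [(x * cell, 0.0, z * cell)
                for z in range(rows + 1) for x in range(cols + 1)]
    indices = []
    stride = cols + 1  # vertices per row
    for z in range(rows):
        for x in range(cols):
            i = z * stride + x
            indices += [i, i + stride, i + 1,                  # lower-left triangle
                        i + 1, i + stride, i + stride + 1]     # upper-right triangle
    return vertices, indices
```

Vary the vertex heights with a noise function and this same skeleton produces procedural terrain; that is where the "variation" and "scalability" benefits come from.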

Matt's Webcorner - Marching Cubes. Marching cubes is a simple algorithm for creating a triangle mesh from an implicit function (one of the form f(x, y, z) = 0). It works by iterating ("marching") over a uniform grid of cubes superimposed over a region of the function. If all 8 vertices of the cube are positive, or all 8 vertices are negative, the cube is entirely above or entirely below the surface and no triangles are emitted. Otherwise, the cube straddles the function and some triangles and vertices are generated. Since each vertex can either be positive or negative, there are technically 2^8 = 256 possible configurations, but many of these are equivalent to one another. We iterate over all cubes, adding triangles to a list, and the final mesh is the union of all these triangles. Even more intelligent forms of marching cubes, which adapt their cube resolution to match local surface complexity, produce pretty low quality meshes. Nevertheless, marching cubes is useful for its simplicity.
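The 256 configurations come from packing the sign of each of the 8 corners into one byte, which is then used to index the triangle lookup table. A minimal Python sketch of just that indexing step:

```python
def cube_case_index(corner_values, iso=0.0):
    """Pack the inside/outside state of the 8 cube corners into one byte (0..255)."""
    index = 0
    for bit, value in enumerate(corner_values):
        if value < iso:           # this corner is below the isosurface
            index |= 1 << bit
    return index
```

Index 0 (all corners outside) and index 255 (all corners inside) are the two trivial cases where no triangles are emitted; the other 254 entries of the lookup table describe which cube edges the triangles cross.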

Polygonising a scalar field (Marching Cubes). Also known as: "3D Contouring", "Marching Cubes", "Surface Reconstruction". Written by Paul Bourke, May 1994. Based on tables by Cory Gene Bloyd, along with additional example source code (marchingsource.cpp); an alternative table by Geoffrey Heller; rchandra.zip: C++ classes contributed by Raghavendra Chandrashekara; OpenGL source code and a sample volume (cell.gz, old); volexample.zip: an example showing how to call polygonise, including a sample MRI dataset; an improved (2018) Qt/OpenGL example courtesy Dr. This document describes an algorithm for creating a polygonal surface representation of an isosurface of a 3D scalar field. There are many applications for this type of technique; two very common ones are reconstruction of a surface from medical volumetric datasets. The fundamental problem is to form a facet approximation to an isosurface through a scalar field sampled on a rectangular 3D grid. The indexing convention for vertices and edges used in the algorithm are shown below.
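Once a cube edge is known to cross the isosurface, the vertex is placed by linear interpolation between the two corner samples. A Python sketch of that interpolation step (the same role as the VertexInterp helper in Bourke's C code, though this version is my own):

```python
def vertex_interp(iso, p1, p2, v1, v2):
    """Linearly interpolate the isosurface crossing along a cube edge.

    p1, p2 are the edge's corner positions; v1, v2 the scalar values there.
    """
    if abs(iso - v1) < 1e-9:
        return p1                      # crossing sits exactly on the first corner
    if abs(iso - v2) < 1e-9:
        return p2                      # crossing sits exactly on the second corner
    if abs(v1 - v2) < 1e-9:
        return p1                      # degenerate edge: values equal, avoid div by 0
    t = (iso - v1) / (v2 - v1)         # fraction of the way from p1 to p2
    return tuple(a + t * (b - a) for a, b in zip(p1, p2))
```

Placing vertices this way, rather than at edge midpoints, is what makes the reconstructed surface smooth instead of stair-stepped.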

The marching cubes algorithm is a well-known algorithm in the field of computer graphics that provides a rather straightforward way of generating polygon meshes from voxel data. Naive implementations that follow the basic description of the algorithm are not too hard to find, yet many of these example implementations suffer from the fact that they generate non-unified/non-indexed triangle meshes, besides evaluating voxel density functions up to eight times at exactly the same spot. This article addresses both of these issues, providing rather simple yet fast and efficient solutions that yield perfect results. Premises: the basic naive implementation to be optimized throughout this article is assumed to be of the form:

static const D3DXVECTOR3 relativeCornerPositions[8] =
{
    { 0, 0, 1 }, { 1, 0, 1 }, { 1, 0, 0 }, { 0, 0, 0 },
    { 0, 1, 1 }, { 1, 1, 1 }, { 1, 1, 0 }, { 0, 1, 0 }
};

D3DXVECTOR3 vertices[MAX_VERTICES];
unsigned int indices[MAX_INDICES];
int vertexCount = 0;
int indexCount = 0;
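The standard fix for non-indexed output is to cache vertices by the cube edge they lie on: neighbouring cubes share edges, so the second cube that needs a vertex reuses the index the first one created. A minimal Python sketch of that idea (the key scheme here is my own illustration, not the article's exact code):

```python
def add_vertex(cache, vertices, edge_key, position):
    """Return the index for a vertex on a cube edge, creating it only once.

    edge_key identifies the edge globally, e.g. (x, y, z, axis) of its low corner,
    so adjacent cubes that share the edge get the same index back.
    """
    if edge_key not in cache:
        cache[edge_key] = len(vertices)
        vertices.append(position)
    return cache[edge_key]
```

The same memoization trick applies to the density function itself: cache the sample per grid corner and each value is computed once instead of up to eight times.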

www.iki.fi/sol - Tutorials - Interpolation Tricks. While making demos I've found different interpolation tricks to be extremely valuable. Generally speaking, when making some kind of animation, we know the starting and ending positions, and want to transition between these. Values between 0 and 1 have some rather interesting properties, including the fact that you can multiply any value between 0 and 1 with another value between 0 and 1, and the result is guaranteed to be between 0 and 1. (*2) These properties can be used to tweak the way we move from 0 to 1 in various ways. Let's say we want to move the variable X between points A and B in N steps:

for (i = 0; i < N; i++)
{
    X = ((A * i) + (B * (N - i))) / N;
}

Or, put another way, this becomes:

for (i = 0; i < N; i++)
{
    v = i / (float)N;
    X = (A * v) + (B * (1 - v));
}

where v ranges from 0 to 1. (*3) Moving between two values in N discrete steps like this is called linear interpolation, or "lerp" for short. A common smoothing function is:

#define SMOOTHSTEP(x) ((x) * (x) * (3 - 2 * (x)))
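The two tricks above translate directly into a few lines of Python. The smoothstep polynomial has zero slope at both ends, so feeding its output into a lerp gives an ease-in/ease-out motion instead of a constant-speed one:

```python
def lerp(a, b, t):
    """Plain linear interpolation: t=0 gives a, t=1 gives b."""
    return a + (b - a) * t

def smoothstep(t):
    """Ease-in/ease-out curve 3t^2 - 2t^3; flat at t=0 and t=1."""
    return t * t * (3 - 2 * t)
```

Usage: `lerp(a, b, smoothstep(i / n))` replaces the constant-speed `lerp(a, b, i / n)` with a motion that starts slowly, speeds up in the middle, and slows to a stop.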

Decision Modeling and Optimization in Game Design, Part 1: Introduction. Most of game design is a process of search. When we design, we are evaluating many different possible design configurations to solve a given design problem, whether it be the way the rooms in a dungeon are connected, the set of features and capabilities that different types of game agents will possess, the specific “magic numbers” that govern unit effectiveness in a combat system, or even the combination of features our game will include in the first place. Just as an AI-driven character will use a pathfinding system to navigate through the game world, design involves navigating through a very large space of possible configurations by taking some initial configuration and iteratively modifying it. We look carefully at the state of some aspect of our design, whether it be our combat system, one of the parts of our game world, a technology tree in a strategy game, or what have you, and attempt to find a way to improve it by changing that configuration.
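That "take a configuration, modify it, keep the change if it's better" loop is exactly hill climbing, the simplest local search. A generic Python sketch (my own illustration of the idea, not code from the article):

```python
import random

def hill_climb(evaluate, mutate, initial, steps=1000, seed=0):
    """Iteratively tweak a configuration, keeping only changes that score better."""
    rng = random.Random(seed)
    best = initial
    best_score = evaluate(best)
    for _ in range(steps):
        candidate = mutate(best, rng)     # small random change to the design
        score = evaluate(candidate)
        if score > best_score:            # keep it only if the design improved
            best, best_score = candidate, score
    return best, best_score
```

The hard part in practice is not the loop but the `evaluate` function: turning a design goal ("combat feels fair") into a number is where the real modeling work happens.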


Generating Fur in DirectX or OpenGL Easily - Tutorials made easy! Fur Effects: Teddies, Cats, Hair... by bkenwright@xbdev.net. Have you ever watched Monsters Inc? Or other movies like Shrek? Or possibly played a computer game on your Xbox where the hero character has realistic-looking fur, and wondered just how you could do that? I bet you thought it was really, really hard. One particular technique for creating good-looking fur, without killing yourself with maths and algorithms and weeks of processing time, is shell texturing! Now this is not one of those easy tutorials that you can swish through (well, I couldn't), so you're going to have to stock up on coffee to get through this puppy! Feedback is always welcome on this; it's sort of a trial-and-error thing for me, reading articles and testing out new ideas that come to me while watching TV :) There's all sorts of things going on with fur! Well, it's going to be a late night for me... Let's have a look at the basic idea of layers and textures. Each shell layer pushes the vertex out along its normal:

float3 P = IN.position.xyz + (IN.normal * FurLength);
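Shell texturing renders the mesh several times, each pass extruded a little further along the vertex normals, with a noise texture punching holes so the stacked shells read as hair strands. The per-layer extrusion from the HLSL line above can be sketched on the CPU side in Python like this (an illustration of the geometry only, not the tutorial's shader):

```python
def shell_positions(position, normal, fur_length, layers):
    """Extrude one vertex along its normal once per shell layer.

    Returns the vertex position for each of the `layers` render passes,
    from the innermost shell out to the full fur length.
    """
    shells = []
    for layer in range(layers):
        t = (layer + 1) / layers   # fraction of full fur length for this pass
        shells.append(tuple(p + n * fur_length * t
                            for p, n in zip(position, normal)))
    return shells
```

With 16 to 32 layers and alpha-tested noise, the stack of translucent slices is what your eye fuses into fur.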

Introduction - Lesson 1: Introduction - Interactive 3D Graphics When does the course begin? This class is self paced. You can begin whenever you like and then follow your own pace. It’s a good idea to set goals for yourself to make sure you stick with the course. How long will the course be available? This class will always be available! How do I know if this course is for me? Take a look at the “Class Summary,” “What Should I Know,” and “What Will I Learn” sections above. Can I skip individual videos? Yes! How much does this cost? It’s completely free! What are the rules on collaboration? Collaboration is a great way to learn. Why are there so many questions? Udacity classes are a little different from traditional courses. What should I do while I’m watching the videos? Learn actively!

Real-Time Rendering · Tracking the latest developments in interactive rendering techniques guest post by Patrick Cozzi, @pjcozzi. This isn’t as crazy as it sounds: WebGL has a chance to become the graphics API of choice for real-time graphics research. Here’s why I think so. An interactive demo is better than a video. WebGL allows us to embed demos in a website, like the demo for The Compact YCoCg Frame Buffer by Pavlos Mavridis and Georgios Papaioannou. A demo gives readers a better understanding than a video alone, allows them to reproduce performance results on their hardware, and enables them to experiment with debug views like the demo for WebGL Deferred Shading by Sijie Tian, Yuqin Shao, and me. WebGL runs on desktop and mobile. Android devices now have pretty good support for WebGL. WebGL is starting to expose modern GPU features. WebGL is based on OpenGL ES 2.0 so it doesn’t expose features like query timers, compute shaders, uniform buffers, etc. WebGL is faster to develop with. Try it. Check out the WebGL Report to see what extensions your browser supports.

Distance Estimated 3D Fractals (Part I). During the last two years, the 3D fractal field has undergone a small revolution: the Mandelbulb (2009), the Mandelbox (2010), the Kaleidoscopic IFSs (2010), and a myriad of equally or even more interesting hybrid systems, such as Spudsville (2010) or the Kleinian systems (2011). All of these systems were made possible by a technique known as distance estimation, and they all originate from the Fractal Forums community. Part I briefly introduces the history of distance estimated fractals and discusses how a distance estimator can be used for ray marching. Part II discusses how to find surface normals, and how to light and color fractals. Part III discusses how to actually create a distance estimator, starting with distance fields for simple geometric objects, then instancing and combining fields (unions, intersections, and differences), and finally folding and conformal transformations, ending up with a simple fractal distance estimator.
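The core ray-marching idea (sphere tracing) is compact: the distance estimator tells you a radius within which the ray cannot hit anything, so you can safely step forward by exactly that amount and repeat until you are close enough to call it a hit. A minimal Python sketch using the simplest possible estimator, a sphere (an illustration of the technique, not the article's renderer):

```python
def sphere_de(p, radius=1.0):
    """Distance estimator for a sphere centered at the origin."""
    return (p[0] ** 2 + p[1] ** 2 + p[2] ** 2) ** 0.5 - radius

def ray_march(origin, direction, de, max_steps=100, eps=1e-4, max_dist=100.0):
    """Sphere tracing: step along the ray by the estimated distance each time."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = de(p)
        if d < eps:
            return t          # close enough to the surface: report the hit distance
        t += d                # safe step: nothing can be closer than d
        if t > max_dist:
            break
    return None               # ray escaped: no intersection
```

Swap `sphere_de` for a Mandelbulb distance estimator and the very same loop renders the fractal; that interchangeability is what made the 2009-2011 explosion of 3D fractals possible.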
