
Vertex Arrays and Indexed Vertices


  • Patrice Terrier

    I am currently doing a lot of OpenGL 3D programming with models of millions of polygons, and the ultimate performance is reached by switching everything to VBOs (working on the GPU with GLSL).
    Also, 64-bit is mandatory to reach top performance in the 3D world, especially because the latest nVIDIA drivers are no longer available in 32-bit.
    Here is a link to a video of what I am doing with my current ObjReader64 project


  • Petr Schreiber jr
    Hi Gary,

    The general rule of thumb is: use glBegin/glEnd to learn OpenGL's way of understanding geometry, to learn that there are vertices, their colors, normals, texture coordinates and more.

    Using this approach for anything beyond easy tasks such as rendering a few hundred triangles or quads will just hold you back from unleashing the potential your GPU has. Why? Plenty of function calls: one call per vertex, per color, and so on, each made on the CPU.

    For performance, check out vertex arrays and vertex buffer objects. They are less pleasant to use, but you can define an object with any number of triangles in a fixed number of function calls, by passing pointers to arrays with all the data stored. You do not even have to use indexing to see the performance gain.



  • Gary Beene
    started a topic Vertex Arrays and Indexed Vertices

    Vertex Arrays and Indexed Vertices

    From what I've read about vertex arrays and indexed vertices, both are useful in providing better OpenGL performance, in that their use can significantly reduce the number of function calls.

    For example, a cube would normally require each vertex to be specified three times, once for every face that uses it. So 24 vertices would be processed, even though eight would be enough.

    Has anyone done any speed tests that quantify the performance improvements possible with the two techniques?

    And, are there any guidelines for how many vertices/polygons must be in a model before the difference might begin to be obvious to a user? I'd guess the number would have to be in the many thousands for it to matter, but I really don't have any experience to back up the guess.

    I had Patrice's Naboo STL model (~270K points, line model, no lighting) running yesterday in a revision of gbSTL that uses OpenGL. The revision does not use either technique, and yet the model seemed to respond with no delays. So I wondered how many vertices it takes to benefit from the two techniques.

    Just trying to get some clarification on how important the two techniques really are and under what circumstances they should be used ...