Intel Graphics

After the rendering optimization I described in my last post, tests with Intel Atom chipset graphics (the N10 chipset) uncovered a problem. The system was falling back to software rendering, which slows things down by a factor of about a thousand. It turns out that more than two texture accesses in a shader are too much for this hardware. In addition, many Intel chips run all vertex shaders in software, and that causes another roughly tenfold slowdown when libavg's 3-line vertex shader is in use.

So now, there’s a second rendering path with minimal shaders that does vertex processing the old-fashioned way (glMatrixMode etc.) and uses a different shader for those nodes that don’t need any special processing. Still, I recommend staying away from Intel Atom graphics. There is way better hardware out there at the same price point.
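To illustrate what "the old-fashioned way" means, here is a minimal, self-contained sketch of fixed-function vertex processing with glMatrixMode/glLoadIdentity/glOrtho. It uses PyOpenGL and GLUT purely for brevity; it is not libavg code, and libavg's actual fallback path lives in the C++ engine.

    # Illustrative only: fixed-function vertex processing, no vertex shader.
    # Assumes PyOpenGL and GLUT are installed; this is not libavg code.
    from OpenGL.GL import *
    from OpenGL.GLUT import *

    def display():
        glClear(GL_COLOR_BUFFER_BIT)
        # Let the fixed-function pipeline transform vertices instead of a shader.
        glMatrixMode(GL_PROJECTION)
        glLoadIdentity()
        glOrtho(0, 640, 480, 0, -1, 1)   # 2D pixel coordinates, y pointing down
        glMatrixMode(GL_MODELVIEW)
        glLoadIdentity()
        glColor3f(1, 1, 1)
        glBegin(GL_QUADS)
        glVertex2f(100, 100)
        glVertex2f(300, 100)
        glVertex2f(300, 300)
        glVertex2f(100, 300)
        glEnd()
        glutSwapBuffers()

    glutInit()
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB)
    glutInitWindowSize(640, 480)
    glutCreateWindow(b"fixed-function sketch")
    glutDisplayFunc(display)
    glutMainLoop()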

2 thoughts on “Intel Graphics”

  1. Hi,
    I am facing the exact same issue. Due to project constraints we are stuck with Intel Atom, and LibAVG performance is really poor. Can you please elaborate on how I can improve rendering on an Atom machine?

  2. Current libavg should run ok out of the box with Atoms. It’s just that Atom rendering performance in general (i.e. nothing to do with libavg) is not very good :-/.

    That said, ‘poor performance’ can have a lot of different causes. What exactly are you doing? What kind of nodes, how many? Also, you can set the AVG_LOG_CATEGORIES env var to dump a frame profile at the end of execution – that might give you some hints (see http://www.libavg.de/reference/current/misc.html#libavg.avg.Logger).
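    For instance, here is a minimal sketch of enabling the profiling output. The category name "PROFILE" and the exact AVG_LOG_CATEGORIES syntax are assumptions on my part; check the Logger reference linked above for the precise format.

        # Sketch only: enable the frame profile dump via the environment.
        # "PROFILE" as the category name is an assumption; see the Logger docs.
        import os
        os.environ.setdefault("AVG_LOG_CATEGORIES", "PROFILE")

        from libavg import avg

        player = avg.Player.get()
        player.loadString("""
        <avg width="640" height="480">
            <words x="10" y="10" text="profiling test"/>
        </avg>
        """)
        player.play()   # the frame profile is dumped at the end of execution

    Setting the variable in the shell before launching the script also works and sidesteps any import-order concerns.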
