Offscreen Rendering

libavg can render more than one scenegraph at once. One of these canvases is displayed on the screen, and the others are rendered to an offscreen buffer and can be used as the content of image nodes. Each canvas renders a complete tree of libavg nodes.

Internally, canvases are rendered to OpenGL Framebuffer Objects and used as input textures for image nodes.

Concept

You can do lots of things with offscreen canvases. Some of them are not immediately apparent, so here are a few examples:

  • Render once, display several times: If a tree of nodes is displayed at several places on the screen, you can render it once into the canvas and use the canvas in several places. This saves processing power. A canvas is also the only way to display a camera image several times.
  • Render at print resolutions: A canvas can be very large (a typical maximum size is 8192x8192 pixels), so you can render things that are a lot larger than screen size.
  • Prerender complicated subscenes: A canvas can be set up to render on demand, so complicated subscenes can be rendered only when they change and not every frame.
  • Render an animated scene and write it to disk as a video (using the VideoWriter).
  • Use effects like blur or shadow on a group of nodes.
  • Use a complex scene as a mask for a node (using setMaskBitmap).

Usage

An offscreen canvas is just a regular libavg tree with a CanvasNode at its root. It is instantiated by calling Player.createCanvas(). This causes the canvas to be registered with the player. By default, it will be rendered once per frame, directly before the visible avg window is rendered. The resulting texture can then be displayed in one or more ImageNodes by setting an appropriate href:

canvas.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from libavg import avg

player = avg.Player.get()

offscreenCanvas = player.createCanvas(id="londoncalling",
        size=(320,240))
avg.WordsNode(pos=(10,10), text="London Calling", font="Arial",
        parent=offscreenCanvas.getRootNode())

mainCanvas = player.createMainCanvas(size=(640,480))
rootNode = mainCanvas.getRootNode()
avg.ImageNode(href="canvas:londoncalling", parent=rootNode)

player.play()

Of course, canvases can also be deleted again:

player.deleteCanvas("londoncalling")

So far, you can achieve the same result by just placing the WordsNode in the onscreen avg tree, but offscreen rendering gives you a lot more flexibility. By adding a second image node with the same href, the same texture is displayed again. This doesn't take up additional space on the graphics card, and if the offscreen canvas is complex, it is only rendered once. Of course, the image can also be scaled, thereby scaling the complete canvas. Setting the opacity fades the complete (pre-rendered) texture - which in some situations is a lot smoother than fading each component individually.

Event Routing

By default, events are only delivered to the nodes in the main avg tree. This means that if a user clicks on an image that has a canvas href, the event goes to that image node. Sometimes, it makes sense to route events through the containing image nodes to the nodes in the offscreen canvas. Setting canvas.handleevents to True has this effect. In that case, events delivered to nodes inside the canvas report positions relative to the canvas.

Controlling Updates

There are several update strategies for canvases:

  • Once per onscreen frame: The canvas is rendered every time the main screen is rendered. Automatic updates are controlled by the autorender attribute - the default is True.
  • Manual update: The canvas is updated when canvas.render() is called.
  • Once per camera frame: Canvas updates happen every time a camera node gets a new frame. This can be enabled by calling canvas.registerCameraNode().

Usually, the default works very well, but there are some cases where other strategies make sense:

Prerendering

Canvases have a screenshot() method that returns a bitmap. This can be used together with manual updates to allow prerendering of screen elements. The standard keyboard does this, for example, to extract a bitmap per key from a bitmap that contains the complete keyboard. This happens on initialization:

Keyboard Initialization Pseudocode

def createImage(self, pos, size, ovlHref):
    canvas = player.createCanvas(id='offscreen', size=size)
    avg.ImageNode(href=ovlHref, pos=-avg.Point2D(pos),
            parent=canvas.getRootNode())
    canvas.render()
    self.keyBitmap = canvas.screenshot()
    player.deleteCanvas('offscreen')

for each key:
    key.createImage(key.pos, key.size, key.ovlHref)

For each key, a small temporary canvas as big as one key is created. The keyboard bitmap, which contains the complete keyboard, is rendered into the canvas, shifted so that only the current key is visible. The result is saved into a new bitmap. After the algorithm runs, there is one bitmap per key.

Rendering Modified Camera Images

Effects or other modifications to camera images only need to be rendered once for every camera image received. This works very well, for instance, to reduce the GPU load caused by the ChromaKeyFXNode.

Image Quality

Canvases have their own multisampling and mipmapping settings. By default, no multisampling (http://en.wikipedia.org/wiki/Multisampling) is performed and no mipmaps (http://en.wikipedia.org/wiki/Mipmap) are generated. The mipmap settings of image nodes that display the canvas are ignored.

Canvas with multisampling and mipmapping turned on

player.createCanvas(id="londoncalling", size=(320,240),
        multisamplesamples=16, mipmap=True)

On current discrete graphics cards (i.e., non-onboard graphics), mipmapping is usually performed in hardware and takes a negligible amount of time.