Wed, 31 Dec 2014 06:09:35 +0100
Cloned upstream origin tor-browser at tor-browser-31.3.0esr-4.5-1-build1,
revision ID fc1c9ff7c1b2defdbc039f12214767608f46423f, for hacking purposes.
Roadmap

- Move all the fetchers etc. into pixman-image to make pixman-compose.c
  less intimidating.

  DONE

- Make combiners for unified alpha take a mask argument. That way
  we won't need two separate paths for unified vs component in the
  general compositing code.

  DONE, except that the Altivec code needs to be updated. Luca is
  looking into that.

- Delete separate 'unified alpha' path

  DONE

- Split images into their own files

  DONE

- Split the gradient walker code out into its own file

  DONE

- Add scanline getters per image

  DONE

- Generic 64 bit fetcher

  DONE

- Split fast path tables into their respective architecture dependent
  files.

See "Render Algorithm" below for rationale

Images will eventually have these virtual functions:

     get_scanline()
     get_scanline_wide()
     get_pixel()
     get_pixel_wide()
     get_untransformed_pixel()
     get_untransformed_pixel_wide()
     get_unfiltered_pixel()
     get_unfiltered_pixel_wide()

     store_scanline()
     store_scanline_wide()

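As a rough illustration, the list above could map onto a table of function pointers stored in each image. This is only a sketch with made-up names and a toy solid image, not pixman's actual types:

```c
#include <stdint.h>

/* Hypothetical virtual-function table for an image; names mirror the
 * list above but are not pixman's real API. */
typedef struct image image_t;

typedef uint32_t (*get_pixel_t)    (image_t *img, int x, int y);
typedef void     (*get_scanline_t) (image_t *img, int x, int y,
                                    int width, uint32_t *buffer);

struct image
{
    get_pixel_t    get_pixel;
    get_scanline_t get_scanline;
    /* ... the _wide variants, the unfiltered/untransformed getters,
     * and the store_scanline() pair would follow ... */
    uint32_t       solid_color;      /* state for the toy solid image */
};

/* Toy solid image: every pixel is the same color. */
static uint32_t
solid_get_pixel (image_t *img, int x, int y)
{
    (void) x; (void) y;
    return img->solid_color;
}

/* Generic scanline getter built on top of get_pixel(). */
static void
generic_get_scanline (image_t *img, int x, int y,
                      int width, uint32_t *buffer)
{
    int i;
    for (i = 0; i < width; ++i)
        buffer[i] = img->get_pixel (img, x + i, y);
}
```

A generic get_scanline() built on get_pixel() gives a correct fallback; specialized scanline fetchers can then override the pointer per image type.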
1.

Initially we will just have get_scanline() and get_scanline_wide();
these will be based on the ones in pixman-compose. Hopefully this will
reduce the complexity in pixman_composite_rect_general().

Note that there are access considerations - the compose function is
being compiled twice.


2.

Split image types into their own source files. Export noop virtual
reinit() call. Call this whenever a property of the image changes.


3.

Split the get_scanline() call into smaller functions that are
initialized by the reinit() call.

The Render Algorithm:
     (first repeat, then filter, then transform, then clip)

Starting from a destination pixel (x, y), do

     1 x = x - xDst + xSrc
       y = y - yDst + ySrc

     2 reject pixel that is outside the clip

       This treats clipping as something that happens after
       transformation, which I think is correct for client clips. For
       hierarchy clips it is wrong, but who really cares? Without
       GraphicsExposes hierarchy clips are basically irrelevant. Yes,
       you could imagine cases where the pixels of a subwindow of a
       redirected, transformed window should be treated as
       transparent. I don't really care.

       Basically, I think the render spec should say that pixels that
       are unavailable due to the hierarchy have undefined content,
       and that GraphicsExposes are not generated. I.e., basically
       that using non-redirected windows as sources is a failure. This
       is at least consistent with the current implementation and we
       can update the spec later if someone makes it work.

       The implication for render is that it should stop passing the
       hierarchy clip to pixman. In pixman, if a source image has a
       clip it should be used in computing the composite region and
       nowhere else, regardless of what "has_client_clip" says. The
       default should be for there to not be any clip.

       I would really like to get rid of the client clip as well for
       source images, but unfortunately there is at least one
       application in the wild that uses them.

     3 Transform pixel: (x, y) = T(x, y)

     4 Call p = GetUntransformedPixel (x, y)

     5 If the image has an alpha map, then

           Call GetUntransformedPixel (x, y) on the alpha map

           add resulting alpha channel to p

       return p

Where GetUntransformedPixel is:

     6 switch (filter)
       {
       case NEAREST:
           return GetUnfilteredPixel (x, y);
           break;

       case BILINEAR:
           return GetUnfilteredPixel (...)      // 4 times
           break;

       case CONVOLUTION:
           return GetUnfilteredPixel (...)      // as many times as necessary.
           break;
       }

Where GetUnfilteredPixel (x, y) is

     7 switch (repeat)
       {
       case REPEAT_NORMAL:
       case REPEAT_PAD:
       case REPEAT_REFLECT:
           // adjust x, y as appropriate
           break;

       case REPEAT_NONE:
           if (x, y) is outside image bounds
               return 0;
           break;
       }

       return GetRawPixel(x, y)

Where GetRawPixel (x, y) is

     8 Compute the pixel in question, depending on image type.

For gradients, repeat has a totally different meaning, so
UnfilteredPixel() and RawPixel() must be the same function so that
gradients can do their own repeat algorithm.

So, the GetRawPixel

     for bits must deal with repeats
     for gradients must deal with repeats (differently)
     for solids, should ignore repeats.

     for polygons, when we add them, either ignore repeats or do
     something similar to bits (in which case, we may want an extra
     layer of indirection to modify the coordinates).

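The "adjust x, y as appropriate" step in 7 can be sketched per coordinate like this (names and the exact rounding are illustrative; pixman's real code differs):

```c
/* Repeat modes as in step 7 above; the enum and function name are
 * hypothetical. */
typedef enum { REPEAT_NONE, REPEAT_NORMAL, REPEAT_PAD, REPEAT_REFLECT } repeat_t;

/* Map x into [0, size) according to the repeat mode.
 * Returns -1 for REPEAT_NONE when x is out of bounds. */
static int
repeat_coord (repeat_t repeat, int x, int size)
{
    switch (repeat)
    {
    case REPEAT_NORMAL:
        x %= size;
        if (x < 0)
            x += size;              /* C's % can be negative */
        return x;

    case REPEAT_PAD:
        if (x < 0)
            return 0;
        if (x >= size)
            return size - 1;
        return x;

    case REPEAT_REFLECT:
        x %= 2 * size;
        if (x < 0)
            x += 2 * size;
        if (x >= size)
            x = 2 * size - 1 - x;   /* mirror back into range */
        return x;

    case REPEAT_NONE:
    default:
        return (x < 0 || x >= size) ? -1 : x;
    }
}
```

The same function applied to y with the image height completes the adjustment; REPEAT_NONE's -1 corresponds to the "return 0" (transparent) branch in the pseudocode.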
It is then possible to build things like "get scanline" or "get tile" on
top of this. In the simplest case, just repeatedly calling GetPixel()
would work, but specialized get_scanline()s or get_tile()s could be
plugged in for common cases.

By not plugging anything in for images with access functions, we only
have to compile the pixel functions twice, not the scanline functions.

And we can get rid of fetchers for the bizarre formats that no one
uses. Such as b2g3r3 etc. r1g2b1? Seriously? It is also worth
considering a generic format based pixel fetcher for these edge cases.

Since the actual routines depend on the image attributes, the images
must be notified when those change and update their function pointers
appropriately. So there should probably be a virtual function called
(* reinit) or something like that.

There will also be wide fetchers for both pixels and lines. The line
fetcher will just call the wide pixel fetcher. The wide pixel fetcher
will just call expand, except for 10 bit formats.

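The expand step widens narrow channels by bit replication, so that full intensity stays full intensity (0xff maps to 0xffff). A sketch with made-up function names:

```c
#include <stdint.h>

/* Widen an 8 bit channel to 16 bits by replicating the byte. */
static uint16_t
expand_8_to_16 (uint8_t v)
{
    return ((uint16_t) v << 8) | v;
}

/* A 10 bit channel cannot reuse the byte-replication trick directly;
 * one option is to replicate the top bits into the low 6 bits. */
static uint16_t
expand_10_to_16 (uint16_t v)     /* v in [0, 1023] */
{
    return (v << 6) | (v >> 4);
}
```

Replication works directly for 8 bit channels; a 10 bit channel needs its own variant, which is presumably why 10 bit formats are called out as the exception.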
Rendering pipeline:

Drawable:
     0. if (picture has alpha map)
           0.1. Position alpha map according to the alpha_x/alpha_y
           0.2. Replace the alpha channel of the source with the one
                from the alpha map. Replacement only takes place
                in the intersection of the two drawables' geometries.
     1. Repeat the drawable according to the repeat attribute
     2. Reconstruct a continuous image according to the filter
     3. Transform according to the transform attribute
     4. Position image such that src_x, src_y is over dst_x, dst_y
     5. Sample once per destination pixel
     6. Clip. If a pixel is not within the source clip, then no
        compositing takes place at that pixel. (I.e., it's *not*
        treated as 0).

Sampling a drawable:

     - If the drawable does not have an alpha channel, its pixels
       are treated as opaque.

Note on reconstruction:

     - The top left pixel has coordinates (0.5, 0.5) and pixels are
       spaced 1 apart.

Gradient:
     1. Unless gradient type is conical, repeat the underlying (0, 1)
        gradient according to the repeat attribute
     2. Integrate the gradient across the plane according to type.
     3. Transform according to transform attribute
     4. Position gradient
     5. Sample once per destination pixel.
     6. Clip

Solid Fill:
     1. Repeat has no effect
     2. Image is already continuous and defined for the entire plane
     3. Transform has no effect
     4. Positioning has no effect
     5. Sample once per destination pixel.
     6. Clip

Polygon:
     1. Repeat has no effect
     2. Image is already continuous and defined on the whole plane
     3. Transform according to transform attribute
     4. Position image
     5. Supersample 15x17 per destination pixel.
     6. Clip

Possibly interesting additions:
     - More general transformations, such as warping, or general
       shading.

     - Shader image where a function is called to generate the
       pixel (ie., uploading assembly code).

     - Resampling kernels

       In principle the polygon image uses a 15x17 box filter for
       resampling. If we allow general resampling filters, then we
       get all the various antialiasing types for free.

       Bilinear downsampling looks terrible and could be much
       improved by a resampling filter. NEAREST reconstruction
       combined with a box resampling filter is what GdkPixbuf
       does, I believe.

       Useful for high frequency gradients as well.

       (Note that the difference between a reconstruction and a
       resampling filter is mainly where in the pipeline they
       occur. High quality resampling should use a correctly
       oriented kernel, so it should happen after transformation.)

       An implementation can transform the resampling kernel and
       convolve it with the reconstruction if it so desires, but it
       will need to deal with the fact that the resampling kernel
       will not necessarily be pixel aligned.

       "Output kernels"

       One could imagine doing the resampling after compositing,
       ie., for each destination pixel sample each source image 16
       times, then composite those subpixels individually, then
       finally apply a kernel.

       However, this is effectively the same as full screen
       antialiasing, which is a simpler way to think about it. So
       resampling kernels may make sense for individual images, but
       not as a post-compositing step.

       Fullscreen AA is inefficient without chained compositing
       though. Consider an (image scaled up to oversample size IN
       some polygon) scaled down to screen size. With the current
       implementation, there will be a huge temporary. With chained
       compositing, the whole thing ends up being equivalent to the
       output kernel from above.

     - Color space conversion

       The complete model here is that each surface has a color
       space associated with it and that the compositing operation
       also has one associated with it. Note also that gradients
       should have associated colorspaces.

     - Dithering

       If people dither something that is already dithered, it will
       look terrible, but don't do that, then. (Dithering happens
       after resampling if at all - what is the relationship
       with color spaces? Presumably dithering should happen in
       linear intensity space).

     - Floating point surfaces, 16, 32 and possibly 64 bit per
       channel.

Maybe crack:

     - Glyph polygons

       If glyphs could be given as polygons, they could be
       positioned and rasterized more accurately. The glyph
       structure would need subpixel positioning though.

     - Luminance vs. coverage for the alpha channel

       Whether the alpha channel should be interpreted as luminance
       modulation or as coverage (intensity modulation). This is a
       bit of a departure from the rendering model though. It could
       also be considered whether it should be possible to have
       both channels in the same drawable.

     - Alternative for component alpha

       - Set component-alpha on the output image.

         - This means each of the components are sampled
           independently and composited in the corresponding
           channel only.

       - Have 3 x oversampled mask

         - Scale it down by 3 horizontally, with [ 1/3, 1/3, 1/3 ]
           resampling filter.

       Is this equivalent to just using a component alpha mask?

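The horizontal downscale in the second alternative could look like this (a sketch; the function name and the rounding choice are assumptions):

```c
#include <stdint.h>

/* Collapse a 3x-oversampled a8 mask horizontally with a
 * [1/3, 1/3, 1/3] box filter, yielding one mask sample per
 * destination pixel. */
static void
downscale_mask_3x (const uint8_t *src,   /* width * 3 samples */
                   uint8_t       *dst,   /* width samples */
                   int            width)
{
    int i;
    for (i = 0; i < width; ++i)
    {
        /* Average each group of three subsamples; the +1 biases the
         * truncating division toward the nearest value. */
        int sum = src[3 * i] + src[3 * i + 1] + src[3 * i + 2];
        dst[i] = (uint8_t) ((sum + 1) / 3);
    }
}
```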
Incompatible changes:

     - Gradients could be specified with premultiplied colors. (You
       can use a mask to get things like gradients from solid red to
       transparent red.)

Refactoring pixman

The pixman code is not particularly nice, to put it mildly. Among the
issues are

- inconsistent naming style (fb vs Fb, camelCase vs
  underscore_naming). Sometimes there is even inconsistency *within*
  one name.

      fetchProc32 ACCESS(pixman_fetchProcForPicture32)

  may be one of the ugliest names ever created.

  coding style:
      use the one from cairo except that pixman uses this brace style:

          while (blah)
          {
          }

      Format do while like this:

          do
          {

          }
          while (...);

- PIXMAN_COMPOSITE_RECT_GENERAL() is horribly complex

- switch case logic in pixman-access.c

  Instead it would be better to just store function pointers in the
  image objects themselves:

      get_pixel()
      get_scanline()

- Much of the scanline fetching code is for formats that no one
  ever uses. a2r2g2b2 anyone?

  It would probably be worthwhile having a generic fetcher for any
  pixman format whatsoever.

- Code related to particular image types should be split into individual
  files.

      pixman-bits-image.c
      pixman-linear-gradient-image.c
      pixman-radial-gradient-image.c
      pixman-solid-image.c

- Fast path code should be split into files based on architecture:

      pixman-mmx-fastpath.c
      pixman-sse2-fastpath.c
      pixman-c-fastpath.c

      etc.

  Each of these files should then export a fastpath table, which would
  be declared in pixman-private.h. This should allow us to get rid
  of the pixman-mmx.h files.

  The fast path table should describe each fast path. I.e., there
  should be bitfields indicating what things the fast path can handle,
  rather than, as now, allowing only one format per src/mask/dest.
  For example:

      {
          FAST_a8r8g8b8 | FAST_x8r8g8b8,
          FAST_null,
          FAST_x8r8g8b8,
          FAST_repeat_normal | FAST_repeat_none,
          the_fast_path
      }

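Matching against such a table entry might then be a simple bitwise test per attribute. A hypothetical sketch, not pixman's real definitions:

```c
#include <stdint.h>

/* Illustrative FAST_* flags; real tables would cover many more
 * formats and repeat modes. */
#define FAST_a8r8g8b8       (1u << 0)
#define FAST_x8r8g8b8       (1u << 1)
#define FAST_null           (1u << 2)
#define FAST_repeat_none    (1u << 3)
#define FAST_repeat_normal  (1u << 4)

typedef struct
{
    uint32_t src_formats;    /* any of these source formats ... */
    uint32_t mask_formats;   /* ... with any of these masks ... */
    uint32_t dest_formats;   /* ... into any of these dests ... */
    uint32_t repeat_modes;   /* ... under any of these repeats  */
    void   (*func) (void);
} fast_path_t;

/* An entry matches when every requested attribute is covered by the
 * corresponding bitfield. */
static int
fast_path_matches (const fast_path_t *fp,
                   uint32_t src, uint32_t mask,
                   uint32_t dest, uint32_t repeat)
{
    return (fp->src_formats  & src)  &&
           (fp->mask_formats & mask) &&
           (fp->dest_formats & dest) &&
           (fp->repeat_modes & repeat);
}
```

One entry can then serve several src/mask/dest/repeat combinations, instead of one table row per exact combination as now.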
There should then be *one* file that implements pixman_image_composite().
It should do this:

      optimize_operator();

      convert 1x1 repeat to solid (actually this should be done at
      image creation time).

      is there a useful fastpath?

There should be a file called pixman-cpu.c that contains all the
architecture specific stuff to detect what CPU features we have.

Issues that must be kept in mind:

     - we need accessor code to be preserved

     - maybe there should be a "store_scanline" too?

       Is this sufficient?

       We should preserve the optimization where the
       compositing happens directly in the destination
       whenever possible.

     - It should be possible to create GPU samplers from the
       images.

The "horizontal" classification should be a bit in the image, the
"vertical" classification should just happen inside the gradient
file. Note though that

     (a) these will change if the transformation/repeat changes.

     (b) at the moment the optimization for linear gradients
         takes the source rectangle into account. Presumably
         this is to also optimize the case where the gradient
         is close enough to horizontal?

Who is responsible for repeats? In principle it should be the scanline
fetch. Right now NORMAL repeats are handled by walk_composite_region()
while other repeats are handled by the scanline code.


(Random note on filtering: do you filter before or after
transformation? Hardware is going to filter after transformation;
this is also what pixman does currently). It's not completely clear
what filtering *after* transformation means. One thing that might look
good would be to do *supersampling*, ie., compute multiple subpixels
per destination pixel, then average them together.
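That supersampling idea can be sketched as follows, using the (0.5, 0.5) pixel-center convention from earlier; sample_fn_t stands in for the transformed and filtered source lookup, and all names here are made up:

```c
#include <stdint.h>

/* Evaluate the source at n x n subpixel positions inside one
 * destination pixel and average the a8r8g8b8 results per channel. */
typedef uint32_t (*sample_fn_t) (double x, double y);

static uint32_t
supersample_pixel (sample_fn_t sample, int dst_x, int dst_y, int n)
{
    unsigned a = 0, r = 0, g = 0, b = 0;
    int i, j;

    for (j = 0; j < n; ++j)
    {
        for (i = 0; i < n; ++i)
        {
            /* Subsample centers inside the destination pixel. */
            double x = dst_x + (i + 0.5) / n;
            double y = dst_y + (j + 0.5) / n;
            uint32_t p = sample (x, y);

            a += (p >> 24) & 0xff;
            r += (p >> 16) & 0xff;
            g += (p >>  8) & 0xff;
            b +=  p        & 0xff;
        }
    }

    n *= n;    /* total number of subsamples */
    return ((a / n) << 24) | ((r / n) << 16) | ((g / n) << 8) | (b / n);
}

/* Trivial sampler for demonstration: opaque white for x < 0.5,
 * opaque black elsewhere. */
static uint32_t
demo_sampler (double x, double y)
{
    (void) y;
    return x < 0.5 ? 0xffffffffu : 0xff000000u;
}
```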