Discussion:
10-bit Mesa/Gallium support
Marek Olšák
2017-11-23 17:35:06 UTC
Hi everybody,

Mario, feel free to push your patches if you haven't yet. (except the
workaround)

For AMD, I applied Mario's patches (except Wayland - that didn't
apply) and added initial Gallium support:
https://cgit.freedesktop.org/~mareko/mesa/log/?h=10bit

What's the status of Glamor?

Do we have patches for xf86-video-amdgpu? The closed-source driver should
have 10-bit support, meaning we should have DDX patches already somewhere,
right?

Thanks,
Marek
Ilia Mirkin
2017-11-23 17:45:43 UTC
Post by Marek Olšák
[...]
I'd like to test this out with nouveau as well... do I understand
correctly that I shouldn't need anything special to check if it
basically works? i.e. I apply the patches, start Xorg in bpp=30 mode,
and then if glxgears works then I'm done? Is there a good way to tell that
I'm really in 30bpp mode as far as all the software is concerned? (I don't
have a colorimeter or whatever fancy hw to *really* tell the
difference, although I do have a "deep color" TV.) If used with a
24bpp display, is the hw supposed to dither somehow?

-ilia
Michel Dänzer
2017-11-23 17:55:01 UTC
Post by Ilia Mirkin
Post by Marek Olšák
[...]
I'd like to test this out with nouveau as well... do I understand
correctly that I shouldn't need anything special to check if it
basically works? i.e. I apply the patches, start Xorg in bpp=30 mode,
First you'd need to add support for depth 30 to the Xorg driver you're
using.
Post by Ilia Mirkin
and then if glxgears works then I'm done? Is there a good way that I'm
really in 30bpp mode as far as all the software is concerned?
xdpyinfo or glxinfo should work for that.
Post by Ilia Mirkin
If used with a 24bpp display, is the hw supposed to dither somehow?
We don't know how your hardware works. :)
--
Earthling Michel Dänzer | http://www.amd.com
Libre software enthusiast | Mesa and X developer
Ilia Mirkin
2017-11-23 18:07:14 UTC
Post by Michel Dänzer
Post by Ilia Mirkin
Post by Marek Olšák
[...]
I'd like to test this out with nouveau as well... do I understand
correctly that I shouldn't need anything special to check if it
basically works? i.e. I apply the patches, start Xorg in bpp=30 mode,
First you'd need to add support for depth 30 to the Xorg driver you're
using.
Right. That should be easy - either it's already there, or it should
be trivial to add. [Yeah, famous last words.] At least the DRM driver
supports 30bpp fbs, so that's a start.
Post by Michel Dänzer
Post by Ilia Mirkin
and then if glxgears works then I'm done? Is there a good way that I'm
really in 30bpp mode as far as all the software is concerned?
xdpyinfo or glxinfo should work for that.
OK, so I'm looking at the output in my regular bgrx8888 mode:

depth of root window: 24 planes
default visual id: 0x21
visual:
visual id: 0x21
class: TrueColor
depth: 24 planes
available colormap entries: 256 per subfield
red, green, blue masks: 0xff0000, 0xff00, 0xff
significant bits in color specification: 8 bits

And presumably that would all say 30? And then in glxinfo the visual
depth would also become 30?
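For reference, here is a sketch (my own illustration, not actual xdpyinfo output) of the channel masks a depth-30 TrueColor visual would report, assuming the x2r10g10b10 layout discussed in this thread:

```python
# Illustrative: the channel masks a depth-30 (x2r10g10b10) TrueColor visual
# would advertise, analogous to the 8-bit masks shown above.
def mask(bits, shift):
    """Build a contiguous bitmask of `bits` bits starting at bit `shift`."""
    return ((1 << bits) - 1) << shift

red, green, blue = mask(10, 20), mask(10, 10), mask(10, 0)

print(hex(red), hex(green), hex(blue))  # 0x3ff00000 0xffc00 0x3ff
# The three masks cover 30 of the 32 bits; the top 2 bits are unused padding.
assert (red | green | blue) == 0x3FFFFFFF
```

So yes: "significant bits in color specification" would read 10 bits, and the masks would change from 0xff0000/0xff00/0xff to the values above.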
Post by Michel Dänzer
Post by Ilia Mirkin
If used with a 24bpp display, is the hw supposed to dither somehow?
We don't know how your hardware works. :)
I was hoping there was a common answer to that question. But I suppose
you're right. And we use a 256-sized LUT which is going to mess this
up bigtime... the later hardware does support larger ones... maybe.

Thanks,

-ilia
Marek Olšák
2017-11-23 18:22:56 UTC
Post by Ilia Mirkin
Post by Marek Olšák
[...]
I'd like to test this out with nouveau as well... do I understand
correctly that I shouldn't need anything special to check if it
basically works? i.e. I apply the patches, start Xorg in bpp=30 mode,
and then if glxgears works then I'm done? Is there a good way that I'm
really in 30bpp mode as far as all the software is concerned? (I don't
have a colorimeter or whatever fancy hw to *really* tell the
difference, although I do have a "deep color" TV.) If used with a
24bpp display, is the hw supposed to dither somehow?
glxinfo should print the 10-bit visuals, and the GLX piglit tests should test them.

Or you can convince waffle/gbm to give you a 10-bit off-screen back buffer.

Marek
Mario Kleiner
2017-11-23 18:31:35 UTC
Post by Ilia Mirkin
Post by Marek Olšák
Hi everybody,
Mario, feel free to push your patches if you haven't yet. (except the
workaround)
Hi,

just started 10 minutes ago with rebasing my current patchset against
mesa master. Will need some adjustments and retesting against i965.

I was also just "sort of done" with a mesa/gallium 10-bit version. I
think I'll submit rev 3 later today or tomorrow, and maybe we'll need to
sort out then what goes where. I'll compare with Marek's branch...

The current state of my series for AMD here is that radeon-kms + ati-ddx
works nicely under EXA (and with a slightly patched weston), but the
ati-ddx also needed some small patches which I have to send out. On
amdgpu-kms I know it works under my patched weston branch.

What is completely missing is glamor support, ergo support for at least
amdgpu-ddx and modesetting-ddx -- and xwayland.
Post by Marek Olšák
Do we have patches for xf86-video-amdgpu? The closed-source driver should
have 10-bit support, meaning we should have DDX patches already somewhere,
right?
Somewhere there must be some, as the amdgpu-pro driver with the
proprietary libGL supported depth 30, at least in some version I tested
earlier this year.
Post by Ilia Mirkin
I'd like to test this out with nouveau as well...
[...]
nouveau is quite a bit of work, and it's not so clear how to proceed.

My current series does do proper xrgb2101010 / argb2101010 rendering
under gallium on both nv50 and nvc0 (tested on a GeForce 9600 for
Tesla, and a GTX 970 and 1050 for Maxwell and Pascal). I used PRIME render
offload under both DRI3/Present and Wayland/Weston with both Intel and
AMD as display GPUs, so I know the drivers work together properly and
nouveau-gallium renders correctly.

The display side for native scanout on NVIDIA is somewhat broken at the
moment:

1. Since Linux 4.10, with the switch of nouveau-kms to atomic
modesetting, using drmAddFB() with depth/bpp 30/32 maps to the xrgb2101010
format, but nouveau-kms doesn't support xrgb2101010, so setting Xorg to
depth 30 will end in a server abort with a modesetting failure. nouveau
before Linux 4.10 mapped 30/32 to xbgr2101010, which seems to be
supported since nv50. If I boot with a < 4.10 kernel I get a picture at
least on the old GeForce 9600 and GT330M.
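The legacy-drmAddFB() behavior described above can be sketched as a small mapping table (a hypothetical Python illustration of the mapping Mario describes, not actual kernel code):

```python
# Hypothetical sketch of how a legacy drmAddFB() (depth, bpp) pair resolves
# to a fourcc format name, per the behavior described above: since the
# nouveau-kms atomic conversion in Linux 4.10, 30/32 selects XRGB2101010,
# whereas older nouveau kernels picked XBGR2101010.
def legacy_fb_format(depth, bpp, pre_4_10_nouveau=False):
    if (depth, bpp) == (24, 32):
        return "XRGB8888"
    if (depth, bpp) == (30, 32):
        return "XBGR2101010" if pre_4_10_nouveau else "XRGB2101010"
    raise ValueError(f"unhandled depth/bpp combination: {depth}/{bpp}")

assert legacy_fb_format(30, 32) == "XRGB2101010"       # 4.10+: unsupported by nouveau-kms
assert legacy_fb_format(30, 32, pre_4_10_nouveau=True) == "XBGR2101010"
```

This is why the server aborts: the format the kernel infers from depth/bpp alone is not one nouveau-kms can scan out; drmAddFB2() sidesteps the guessing by taking an explicit fourcc.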

If I hack the nouveau-ddx to use an xrgb2101010 color channel mask (red in
the MSBs, blue in the LSBs) instead of the correct xbgr2101010 mask, then I
can get nouveau-gallium to render 10 bits, but of course with swapped red
and blue channels. Switching dithering on via xrandr allows rendered
10-bit images to reach an 8 bpc display, as confirmed via a
colorimeter. I hope a deep color TV might work without dithering.
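The red/blue swap that results from mislabeling an xbgr2101010 buffer with xrgb2101010 masks can be shown with a small packing sketch (example values are mine):

```python
# Illustrative: pack a pixel in xbgr2101010 layout (blue in bits 20-29,
# red in bits 0-9), then read it back with xrgb2101010 masks (red in the
# MSBs) -- the red and blue channels come out swapped.
def pack(hi, mid, lo):
    """Pack three 10-bit components into bits 20-29, 10-19, 0-9."""
    return (hi << 20) | (mid << 10) | lo

r, g, b = 1023, 512, 0          # a bright red 10-bit pixel
xbgr_pixel = pack(b, g, r)      # xbgr2101010: blue occupies the MSBs

red_seen = (xbgr_pixel >> 20) & 0x3FF   # xrgb2101010 red mask: 0x3ff00000
blue_seen = xbgr_pixel & 0x3FF          # xrgb2101010 blue mask: 0x3ff
assert (red_seen, blue_seen) == (0, 1023)  # red and blue swapped
```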

According to

https://github.com/envytools/envytools/blob/master/rnndb/display/nv_evo.xml

GPUs since Kepler GK104 support xrgb2101010 scanout. With a hacked
nouveau-kms I can get the Maxwell and Pascal cards to accept
xrgb2101010, but the display is beyond weird. So far I couldn't make
much sense of the pixeltrash -- some of it remotely resembles a desktop,
but something is going badly wrong. The xbgr2101010 mode doesn't
work correctly either. The same is true for Wayland+Weston, even if I run
Weston with pixman, keeping Mesa out of the picture. So nouveau-kms
needs some work for all modern NVIDIA GPUs. Gamma table handling
changed quite a bit, so maybe something is wrong there.

2. We might also need some work for EXA on nvc0+, but it's not clear
which problems are caused on the kernel side and which in EXA.

3. In principle the clean solution for nouveau would be to upgrade the
ddx to the drmAddFB2 ioctl and use xbgr2101010 scanout to support
everything back to nv50+, but everything we have in X or Wayland is
meant for xrgb2101010, not xbgr2101010. And we run into ambiguities about
what, e.g., a depth 30 pixmap means in some extensions like
GLX_EXT_texture_from_pixmap. And the GLX extension generally seems to have
unresolved problems with ARGB formats as opposed to ABGR formats, which is
why Mesa doesn't expose ARGB by default -- only on Android atm.

So on nouveau everything except the gallium bits is quite a bit messy at
the moment, but the gallium bits work according to my testing.

-mario
Ilia Mirkin
2017-11-23 18:44:05 UTC
On Thu, Nov 23, 2017 at 1:31 PM, Mario Kleiner
[...]
Sounds like you've gotten moderately far. I wasn't sure if you were
interested in working on this for nouveau. I can't seem to even get
modetest to show XB30 at all on a G92 nor on a GK208. Either way,
probably not relevant discussion for this list.

(Hm, looking at the code, it actually looks like it's modetest that's
fubar for rgb10 formats. So there's hope!)

-ilia
Mario Kleiner
2017-11-23 19:02:38 UTC
Post by Ilia Mirkin
[...]
3. In principle the clean solution for nouveau would be to upgrade the ddx
to drmAddFB2 ioctl, and use xbgr2101010 scanout to support everything back
to nv50+, but everything we have in X or Wayland is meant for xrgb2101010
not xbgr2101010. And we run into ambiguities of what, e.g., a depth 30
pixmap means in some extensions like glx_texture_form_pixmap. And the GLX
extension generally seems to have unresolved problems with ARGB formats
instead of ABGR formats, which is why Mesa doesn't expose ARGB by default --
only on Android atm.
Mistyped; I actually meant BGRA/BGRX is fine, but RGBA/RGBX has
confusion problems in GLX, so that's only exposed on Android.
Post by Ilia Mirkin
So on nouveau everything except the gallium bits is quite a bit messy at the
moment, but the gallium bits work according to my testing.
Sounds like you've gotten moderately far. I wasn't sure if you were
interested in working on this for nouveau. I can't seem to even get
modetest to show XB30 at all on a G92 nor on a GK208. Either way,
probably not relevant discussion for this list.
I'm mostly stuck on nouveau; not sure how much time I'll have to look
into it at the moment. The Mesa/Gallium bits seem to work fine, so that's
something. For EXA on nvc0 I lack the knowledge, but I guess we'd first
need to get the KMS driver working properly before we know whether there
is a problem in EXA at all.
Post by Ilia Mirkin
(Hm, looking at the code, it actually looks like it's modetest that's
fubar for rgb10 formats. So there's hope!)
Yes, something looked broken there as well. I side-stepped it with
weston + pixman 10-bit so far.

As far as I'm concerned it would be good to get all the Mesa bits and
then glamor and the DDXes ready ASAP, so maybe we have something somewhat
workable for Xorg 1.20, and then deal with the kernel stuff.

-mario
Ilia Mirkin
2017-11-23 20:00:42 UTC
On Thu, Nov 23, 2017 at 2:02 PM, Mario Kleiner
[...]
Mistyped, i actually meant BGRA/BGRX is fine, but RGBA/RGBX has confusion
problems in GLX, so that's only exposed on Android.
Well, the hardware supports what it supports. Not much we can do about
that. I thought the color masks were supposed to avoid confusion, i.e.
if it's 30bpp and red is 0x3ff, then it's one, if red is 0x3ff00000
then it's the other.
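The disambiguation Ilia describes could be sketched as a small helper (hypothetical, for illustration only):

```python
# Hypothetical helper: at 30 bpp, the red mask alone distinguishes the
# two 10-bit layouts, as suggested above.
def layout_from_red_mask(red_mask):
    if red_mask == 0x3FF00000:
        return "xrgb2101010"   # red in the most significant component
    if red_mask == 0x000003FF:
        return "xbgr2101010"   # red in the least significant component
    raise ValueError(f"not a 10-bit-per-channel red mask: {red_mask:#x}")

assert layout_from_red_mask(0x3FF00000) == "xrgb2101010"
assert layout_from_red_mask(0x3FF) == "xbgr2101010"
```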
Post by Mario Kleiner
[...]
I'm mostly stuck with nouveau, not sure how much time i'll have to look into
it atm. The Mesa/Gallium bits seem to work fine, so that's something. Exa
nvc0 i lack the knowledge, but i guess we'd first need to get the kms driver
working properly before we know if there is a problem in exa at all.
If you could post your hackpatches, I could take it from there. I
wouldn't be surprised if the EXA callbacks needed a little help to
render to rgb10a2 surfaces (or bgr10a2). There's a lot of hardcoding
in there.
Post by Ilia Mirkin
(Hm, looking at the code, it actually looks like it's modetest that's
fubar for rgb10 formats. So there's hope!)
Yes, something looked broken there as well. I side-stepped it with weston +
pixman 10 bit so far.
I sent a fix for it a few minutes ago. Works fine now (or at least ...
a lot less obviously wrong than "black screen"). I still think that
there are LUT issues, since we always go through a 256-entry color LUT.
There are options to make that 1024 or higher, though. Perhaps the LUT
gets auto-disabled when you scan out a 10 bpc surface but have the
256-entry LUT enabled? Needs some experimentation.

-ilia
Tapani Pälli
2017-11-24 07:13:23 UTC
Post by Mario Kleiner
Hi,
just started 10 minutes ago with rebasing my current patchset against
mesa master. Will need some adjustments and retesting against i965.
I have gone through the i965 bits and everything looks good to me. The
reason I haven't gone forward with review is that the KDE Plasma desktop
is currently broken due to my patch exposing sRGB visuals, and I wanted
to try to solve that first.

However, it looks like that is going to take longer than expected, so
FWIW I have tested KDE with your patches rebased on top (all but the
Wayland ones) and I did not see any 'extra issues' happening with KDE
Plasma, just the ones that exist already.
Post by Mario Kleiner
I was also just "sort of done" with a mesa/gallium 10 bit version. I
think i'll submit rev 3 later today or tomorrow and maybe we'll need to
sort this out then, what goes where. I'll compare with Mareks branch...
The current state of my series for AMD here is that radeon-kms + ati-ddx
works nicely under exa (and with a slightly patched weston), but the
ati-ddx also needed some small patches which i have to send out. On
amdgpu-kms i know it works under my patched weston branch.
What is completely missing is glamor support, ergo support for at least
amdgpu-ddx and modesetting-ddx -- and xwayland.
Post by Ilia Mirkin
Post by Marek Olšák
For AMD, I applied Mario's patches (except Wayland - that didn't
https://cgit.freedesktop.org/~mareko/mesa/log/?h=10bit
What's the status of Glamor?
Do we have patches for xf86-video-amdgpu? The closed should have
10-bit support, meaning we should have DDX patches already somewhere,
right?
Somewhere there must be some, as the amdgpu-pro driver with the
proprietary libGL supported depth 30 at least in some version i tested
earlier this year?
Post by Ilia Mirkin
I'd like to test this out with nouveau as well... do I understand
correctly that I shouldn't need anything special to check if it
basically works? i.e. I apply the patches, start Xorg in bpp=30 mode,
and then if glxgears works then I'm done? Is there a good way that I'm
really in 30bpp mode as far as all the software is concerned? (I don't
have a colorimeter or whatever fancy hw to *really* tell the
difference, although I do have a "deep color" TV.) If used with a
24bpp display, is the hw supposed to dither somehow?x
   -ilia
nouveau is quite a bit work to do and not so clear how to proceed.
My current series does do proper xrgb2101010 / argb2101010 rendering
under gallium on both nv50 and nvc0 (Tested under GeForce 9600 for
tesla, GTX 970 and 1050 for maxwell and pascal). I used PRIME render
offload under both DRI3/Present and Wayland/Weston with both intel and
amd as display gpus, so i know the drivers work together properly and
nouveau-gallium renders correctly.
1. Since Linux 4.10 with the switch of nouveau-kms to atomic
modesetting, using drmAddFB() with depth/bpp 30/32 maps to xrgb2101010
format, but nouveau-kms doesn't support xrgb2101010, so setting Xorg to
depth 30 will end in a server-abort with modesetting failure. nouveau
before Linux 4.10 mapped 30/32 to xbgr2101010 which seems to be
supported since nv50. If i boot with a < 4.10 kernel i get a picture at
least on the old GeForce 9600 and GT330M.
If i hack nouveau-ddx to use an xrgb2101010 color channel mask (red in
msb's, blue in lsb's) instead of the correct xbgr2101010 mask, then i can
get nouveau-gallium to render 10 bits, but of course with swapped red
and blue channels. Switching dithering on via xrandr lets the rendered
10 bit images get to an 8 bpc display, as confirmed via colorimeter. I
hope a deep color TV might work without dithering.
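The red/blue swap is just the two 2:10:10:10 layouts mirroring each other (component order per drm_fourcc.h, little-endian); a quick sketch:

```c
#include <assert.h>
#include <stdint.h>

/* XRGB2101010: x:R:G:B 2:10:10:10 -- red in the high bits, blue in the low.
 * XBGR2101010: x:B:G:R 2:10:10:10 -- the mirror image.
 * Feeding one layout to hardware programmed for the other swaps red and
 * blue, which is exactly the symptom of lying to the ddx about the mask. */
static uint32_t pack_xrgb2101010(uint32_t r, uint32_t g, uint32_t b)
{
    return (r & 0x3ff) << 20 | (g & 0x3ff) << 10 | (b & 0x3ff);
}

static uint32_t pack_xbgr2101010(uint32_t r, uint32_t g, uint32_t b)
{
    return (b & 0x3ff) << 20 | (g & 0x3ff) << 10 | (r & 0x3ff);
}
```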
According to
https://github.com/envytools/envytools/blob/master/rnndb/display/nv_evo.xml
gpu's since kepler gk104 support xrgb2101010 scanout. With a hacked
nouveau-kms i can get the maxwell and pascal cards to accept
xrgb2101010, but the display is beyond weird. So far i couldn't make
much sense of the pixeltrash -- some of it remotely resembles a desktop,
but something is going badly wrong. The xbgr2101010 mode doesn't
work correctly either. The same is true for Wayland+Weston and even if i run
Weston with pixman, keeping Mesa out of the picture. So nouveau-kms
needs some work for all modern nvidia gpu's. Gamma table handling
changed quite a bit, so maybe something is wrong there.
2. We might also need some work for exa on nvc0+, but it's not clear
which problems are caused by the kernel side, and which by exa.
3. In principle the clean solution for nouveau would be to upgrade the
ddx to drmAddFB2 ioctl, and use xbgr2101010 scanout to support
everything back to nv50+, but everything we have in X or Wayland is
meant for xrgb2101010 not xbgr2101010. And we run into ambiguities of
what, e.g., a depth 30 pixmap means in some extensions like
GLX_EXT_texture_from_pixmap. And the GLX extension generally seems to have
unresolved problems with ARGB formats instead of ABGR formats, which is
why Mesa doesn't expose ARGB by default -- only on Android atm.
So on nouveau everything except the gallium bits is quite messy at
the moment, but the gallium bits work according to my testing.
-mario
_______________________________________________
mesa-dev mailing list
https://lists.freedesktop.org/mailman/listinfo/mesa-dev
Michel Dänzer
2017-11-24 09:23:17 UTC
Permalink
On 2017-11-23 07:31 PM, Mario Kleiner wrote:
Post by Mario Kleiner
3. In principle the clean solution for nouveau would be to upgrade the
ddx to drmAddFB2 ioctl, and use xbgr2101010 scanout to support
everything back to nv50+, but everything we have in X or Wayland is
meant for xrgb2101010 not xbgr2101010. And we run into ambiguities of
what, e.g., a depth 30 pixmap means in some extensions like
GLX_EXT_texture_from_pixmap.
A pixmap itself never has a format per se, it's just a container for an
n-bit integer value per pixel (where n is the pixmap depth). A
compositor using GLX_EXT_texture_from_pixmap has to determine the format
from the corresponding window's visual.
--
Earthling Michel Dänzer | http://www.amd.com
Libre software enthusiast | Mesa and X developer
Ilia Mirkin
2017-12-31 16:53:19 UTC
Permalink
On Thu, Nov 23, 2017 at 1:31 PM, Mario Kleiner
Post by Ilia Mirkin
Post by Marek Olšák
Hi everybody,
Mario, feel free to push your patches if you haven't yet. (except the
workaround)
Hi,
just started 10 minutes ago with rebasing my current patchset against mesa
master. Will need some adjustments and retesting against i965.
I was also just "sort of done" with a mesa/gallium 10 bit version. I think
i'll submit rev 3 later today or tomorrow and maybe we'll need to sort this
out then, what goes where. I'll compare with Mareks branch...
The current state of my series for AMD here is that radeon-kms + ati-ddx
works nicely under exa (and with a slightly patched weston), but the ati-ddx
also needed some small patches which i have to send out. On amdgpu-kms i
know it works under my patched weston branch.
What is completely missing is glamor support, ergo support for at least
amdgpu-ddx and modesetting-ddx -- and xwayland.
3. In principle the clean solution for nouveau would be to upgrade the ddx
to drmAddFB2 ioctl, and use xbgr2101010 scanout to support everything back
to nv50+, but everything we have in X or Wayland is meant for xrgb2101010
not xbgr2101010. And we run into ambiguities of what, e.g., a depth 30
pixmap means in some extensions like glx_texture_form_pixmap. And the GLX
extension generally seems to have unresolved problems with ARGB formats
instead of ABGR formats, which is why Mesa doesn't expose ARGB by default --
only on Android atm.
So on nouveau everything except the gallium bits is quite a bit messy at the
moment, but the gallium bits work according to my testing.
I went through and added support for xbgr2101010 throughout (and
hacked the kernel to assume that 30bpp == that).
Your (I hope latest) patches + my mesa patches are available at
https://github.com/imirkin/mesa/commits/30bpp . It all seems to not be
totally broken, at least glxgears colors are right with X -depth 30.
And rendercheck seems not extremely unhappy either (with
xf86-video-nouveau at least). DRI3 seems broken for some reason that I
haven't determined yet (client stops sending stuff at the protocol
level after the Open call, but I'm weak on what the protocol _should_
be), but DRI2 is totally fine. (And DRI3 is apparently
broken-by-design anyways on all EXA drivers I guess, so I'm not going
to lose too much sleep over it.)
Like you said, Kepler+ supports ARGB, but Tesla and Fermi are stuck
with ABGR. If possible (which in this case it seems to be), I'd rather
maximise the amount of hardware this can all work with.
If you do decide to play with it, on top of the hack to make 30bpp ==
xbgr, if you want to use GF119+, you also need to apply
https://lists.freedesktop.org/archives/nouveau/2017-December/029408.html
-- I'm going to send out an email about the whole LUT situation, but
the backwards-compatibility makes it all rather annoying. I suspect
that without either disabling the LUT (which would be easy to hack
nv50_display.c to do), or using a 1024-entry LUT, you won't get actual
10bpp per channel, still just 8bpp. But can't really tell from just
looking at the SMPTE pattern on my fading LCD monitor :) The
1024-entry LUT is only available on GF119+, so I'm curious whether
earlier chips can output 10bpp at all, or if it all gets chopped down
to 8bpp "on the wire" even if LUT is disabled. Perhaps you can
experiment with a colorimeter and known-good display if you get some
time.
Cheers,

-ilia
Mario Kleiner
2018-01-03 04:51:23 UTC
Permalink
Post by Ilia Mirkin
I went through and added support for xbgr2101010 throughout (and
hacked the kernel to assume that 30bpp == that).
Cool!
Post by Ilia Mirkin
Your (I hope latest) patches + my mesa patches are available at
https://github.com/imirkin/mesa/commits/30bpp . It all seems to not be
Yes, those are the latest patches. It would be great if somebody could
push them to master now if there aren't any objections, given that they
are all tested, and reviewed by Marek and Tapani, and as far as i'm
concerned finished at least for intel and amd. Having a new baseline to
work off without rebasing and manually resolving frequent merge
conflicts would be nice.
Post by Ilia Mirkin
totally broken, at least glxgears colors are right with X -depth 30.
And rendercheck seems not extremely unhappy either (with
xf86-video-nouveau at least). DRI3 seems broken for some reason that I
haven't determined yet (client stops sending stuff at the protocol
level after the Open call, but I'm weak on what the protocol _should_
be), but DRI2 is totally fine. (And DRI3 is apparently
broken-by-design anyways on all EXA drivers I guess, so I'm not going
to lose too much sleep over it.)
I'll give them a try when i get back to my machines and colorimeter
sometime within the next couple of days.
Post by Ilia Mirkin
Like you said, Kepler+ supports ARGB, but Tesla and Fermi are stuck
with ABGR. If possible (which in this case it seems to be), I'd rather
maximise the amount of hardware this can all work with.
Having it working on older gpu's would be nice - most of mine are still
tesla. I assume the downside is Prime renderoffload with intel + nvidia
won't automagically work anymore? Unless we can do a format conversion
xbgr -> xrgb during the blitImage op that converts from tiled
renderoffload gpu format to the linear scanout format of the display
gpu? Or make the exported format dependent on what the display gpu prefers.
Post by Ilia Mirkin
If you do decide to play with it, on top of the hack to make 30bpp ==
xbgr, if you want to use GF119+, you also need to apply
https://lists.freedesktop.org/archives/nouveau/2017-December/029408.html
Do i understand that patch correctly, that the current code for gf119+
enabled a 1024 slot lut, but only initialized the first 256 slots? And
then a 8 bpc fb would only index the properly initialized slots, but the
10 bpc fb also indexed into the 768 uninitialized slots? And your patch
switches back to a pure 256-slot lut? That would explain the kind of
corruption i've seen on maxwell and pascal which made things untestable
on those cards.
Post by Ilia Mirkin
-- I'm going to send out an email about the whole LUT situation, but
the backwards-compatibility makes it all rather annoying. I suspect
that without either disabling the LUT (which would be easy to hack
nv50_display.c to do), or using a 1024-entry LUT, you won't get actual
10bpp per channel, still just 8bpp. But can't really tell from just
looking at the SMPTE pattern on my fading LCD monitor :) The
1024-entry LUT is only available on GF119+, so I'm curious whether
earlier chips can output 10bpp at all, or if it all gets chopped down
to 8bpp "on the wire" even if LUT is disabled. Perhaps you can
experiment with a colorimeter and known-good display if you get some
time.
I enabled spatial dithering to 8 bpc via xrandr props, outputting to
two different 8 bpc panels via DVI and HDMI iirc, and measured that i
got 10 bpc on GeForce 9600 and 330M, so that worked well enough to at
least convince the colorimeter and my eyes. Years ago i also measured
native VGA out to a CRT via DVI-I with the builtin 10 bit dacs of a
GeForce 8800 with the proprietary driver.
I could misremember, with my poor memory and all the testing i did
lately on different gpu's, but i think what happened on nvidia with the
nouveau programmed lut's (also under linux 4.4 with the old lut setup
code) was that the lut's at least on those tesla cards got bypassed for
a 10 bpc fb. I have a test that cycles the lut's for some old-style clut
animation, and the picture stayed static in 10 bpc mode. I will check
again when i'm back at the machines.
Bedtime here, more later.
-mario
Ilia Mirkin
2018-01-03 21:25:21 UTC
Permalink
On Tue, Jan 2, 2018 at 11:51 PM, Mario Kleiner
Post by Ilia Mirkin
I went through and added support for xbgr2101010 throughout (and
hacked the kernel to assume that 30bpp == that).
Cool!
Post by Ilia Mirkin
Your (I hope latest) patches + my mesa patches are available at
https://github.com/imirkin/mesa/commits/30bpp . It all seems to not be
Yes, those are the latest patches. It would be great if somebody could push
them to master now if there aren't any objections, given that they are all
tested, and reviewed by Marek and Tapani, and as far as i'm concerned
finished at least for intel and amd. Having a new baseline to work off
without rebasing and manually resolving frequent merge conflicts would be
nice.
Please try to get Marek to merge it. I wouldn't feel comfortable doing
it until it was all worked out on NVIDIA, and I think you're much
closer on other hardware.
Post by Ilia Mirkin
totally broken, at least glxgears colors are right with X -depth 30.
And rendercheck seems not extremely unhappy either (with
xf86-video-nouveau at least). DRI3 seems broken for some reason that I
haven't determined yet (client stops sending stuff at the protocol
level after the Open call, but I'm weak on what the protocol _should_
be), but DRI2 is totally fine. (And DRI3 is apparently
broken-by-design anyways on all EXA drivers I guess, so I'm not going
to lose too much sleep over it.)
I'll give them a try when i get back to my machines and colorimeter sometime
within the next couple of days.
Post by Ilia Mirkin
Like you said, Kepler+ supports ARGB, but Tesla and Fermi are stuck
with ABGR. If possible (which in this case it seems to be), I'd rather
maximise the amount of hardware this can all work with.
Having it working on older gpu's would be nice - most of mine are still
tesla. I assume the downside is Prime renderoffload with intel + nvidia
won't automagically work anymore? Unless we can do a format conversion xbgr
-> xrgb during the blitImage op that converts from tiled renderoffload gpu
format to the linear scanout format of the display gpu? Or make the exported
format dependent on what the display gpu prefers.
I hadn't considered PRIME offload. I was so happy when the spinning
gears had the correct colors... In theory, the NVIDIA gpu will happily
render to either ABGR or ARGB. It can only scan out one of those, but
assuming that my fiddling with flags doesn't mess up the infra's
ability to allocate a properly-formatted buffer to pass in for
rendering, it should all be fine.
Post by Ilia Mirkin
If you do decide to play with it, on top of the hack to make 30bpp ==
xbgr, if you want to use GF119+, you also need to apply
https://lists.freedesktop.org/archives/nouveau/2017-December/029408.html
Do i understand that patch correctly, that the current code for gf119+
enabled a 1024 slot lut, but only initialized the first 256 slots? And then
More like every 4th slot, which worked just fine for 8bpc buffers.
a 8 bpc fb would only index the properly initialized slots, but the 10 bpc
fb also indexed into the 768 uninitialized slots? And your patch switches
back to pure 256 slot lut? That would explain the kind of corruption i've
seen on maxwell and pascal which made things untestable on those cards.
Correct. Ideally we'd want to use a 1024-position LUT, but legacy API
/ software sadness prevents it for now. There must be a clean way out
of all this, but it'll be more involved -- I took the quick way
initially, esp since it's no worse than what we're doing today.
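To make the every-4th-slot scheme concrete, here is a sketch (my own illustration, not the actual nv50_display.c code) of why an 8 bpc fb never noticed while a 10 bpc fb hit the uninitialized entries:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define HW_LUT_SIZE 1024

/* Load a legacy 256-entry gamma ramp into a 1024-slot hardware LUT the
 * way the pre-fix code effectively did: only every 4th slot is written,
 * the rest keep whatever was in the buffer. */
static void load_legacy_ramp(const uint16_t ramp[256],
                             uint16_t lut[HW_LUT_SIZE])
{
    memset(lut, 0, HW_LUT_SIZE * sizeof(lut[0]));
    for (int i = 0; i < 256; i++)
        lut[i * 4] = ramp[i];
}

/* An 8 bpc framebuffer value c effectively indexes slot c * 4, which is
 * always one of the initialized slots; a 10 bpc value indexes the LUT
 * directly, so 3 of every 4 lookups land on uninitialized entries. */
static uint16_t lookup_8bpc(const uint16_t lut[HW_LUT_SIZE], uint8_t c)
{
    return lut[(unsigned)c * 4];
}

static uint16_t lookup_10bpc(const uint16_t lut[HW_LUT_SIZE], uint16_t c)
{
    return lut[c & 0x3ff];
}
```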
Post by Ilia Mirkin
-- I'm going to send out an email about the whole LUT situation, but
the backwards-compatibility makes it all rather annoying. I suspect
that without either disabling the LUT (which would be easy to hack
nv50_display.c to do), or using a 1024-entry LUT, you won't get actual
10bpp per channel, still just 8bpp. But can't really tell from just
looking at the SMPTE pattern on my fading LCD monitor :) The
1024-entry LUT is only available on GF119+, so I'm curious whether
earlier chips can output 10bpp at all, or if it all gets chopped down
to 8bpp "on the wire" even if LUT is disabled. Perhaps you can
experiment with a colorimeter and known-good display if you get some
time.
I enabled the spatial dithering to 8 bpc via xrandr props, outputting to two
different 8 bpc panels via DVI and HDMI iirc and measuring that i got 10 bpc
on GeForce 9600 and 330M, so that worked fine enough to at least convince
the colorimeter and my eyes. Years ago i also measured native VGA out to a
CRT via DVI-I with the builtin 10 bit dacs of a GeForce 8800 with the
proprietary driver.
I could misremember, with my poor memory and all the testing i did lately on
different gpu's, but i think what happened on nvidia with the nouveau
programmed lut's (also under linux 4.4 with the old lut setup code) was that
the lut's at least on those tesla cards got bypassed for a 10 bpc fb. I have
a test that cycles the lut's for some old-style clut animation, and the
picture stayed static in 10 bpc mode. I will check again when i'm back at
the machines.
Well, if it just auto-skips the LUT, I can test that -- just invert
the LUT and that should be pretty obvious even to the naked eye on a
fading monitor :) But if it doesn't, then we could also make it skip
by explicitly disabling the LUT. Or allow some way for userspace to
indicate that it should skip.
From some follow-on discussions with Dave Airlie, 10bpc with DVI
requires hacks since it natively only supports 8bpc -- you have to
stick the extra bits "somewhere else", which requires an implicit
convention between the monitor and the display image producer. With
HDMI 1.1 / DP, you can have InfoFrames which specify 10bpc, although I
don't think that nouveau takes advantage of this.
So you won't get true 10bpc for now.
By the way, if you have any advice for how I can test the "trueness"
of the 10bpc with the naked eye, I do have a "deep color" TV with HDMI
inputs. I'm guessing some kind of fine gradient which will show
banding at 8bpp?
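A fine gradient is indeed the standard check: stretched over more pixels than 8 bits has distinct levels, 8 bpc output repeats each level for several pixels (visible bands) while 10 bpc steps per pixel. A sketch of the counting:

```c
#include <assert.h>
#include <stdint.h>

/* Map pixel x of a width-pixel ramp to a gray level at the given bit depth. */
static unsigned ramp_level(unsigned x, unsigned width, unsigned bits)
{
    unsigned max = (1u << bits) - 1;
    return x * max / (width - 1);
}

/* Count distinct levels across the ramp: a 1024-pixel ramp uses ~1024
 * levels at 10 bits, but only 256 at 8 bits, so each 8-bit level repeats
 * for ~4 pixels -- the visible band. */
static unsigned distinct_levels(unsigned width, unsigned bits)
{
    unsigned count = 0, prev = ~0u;
    for (unsigned x = 0; x < width; x++) {
        unsigned v = ramp_level(x, width, bits);
        if (v != prev) {
            count++;
            prev = v;
        }
    }
    return count;
}
```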
Cheers,

-ilia
Adam Jackson
2017-11-27 22:01:15 UTC
Permalink
Post by Marek Olšák
Hi everybody,
Mario, feel free to push your patches if you haven't yet. (except the
workaround)
For AMD, I applied Mario's patches (except Wayland - that didn't
apply) and added initial Gallium support:
https://cgit.freedesktop.org/~mareko/mesa/log/?h=10bit
What's the status of Glamor?
Probably not working at all:
https://cgit.freedesktop.org/xorg/xserver/tree/glamor/glamor_transfer.c#n26
At best I'd expect things to be largely unaccelerated. At the moment
glamor assumes pictures are either [ax]rgb8888 or a8 because it doesn't
know about ARB_texture_view/ARB_texture_storage/ARB_sampler_objects.
The last, at least, has a work-in-progress branch here:
https://github.com/anholt/xserver/tree/glamor-sampler-objects
- ajax
Alex Deucher
2017-11-27 22:05:10 UTC
Permalink
Post by Adam Jackson
Post by Marek Olšák
Hi everybody,
Mario, feel free to push your patches if you haven't yet. (except the
workaround)
For AMD, I applied Mario's patches (except Wayland - that didn't
https://cgit.freedesktop.org/~mareko/mesa/log/?h=10bit
What's the status of Glamor?
https://cgit.freedesktop.org/xorg/xserver/tree/glamor/glamor_transfer.c#n26
I think we have a branch internally somewhere with support added to
glamor. I thought Nicolai had sent it out a while ago, but maybe I'm
mixing it up with something else.
Alex
Post by Adam Jackson
At best I'd expect things to be largely unaccelerated. At the moment
glamor assumes pictures are either [ax]rgb8888 or a8 because it doesn't
know about ARB_texture_view/ARB_texture_storage/ARB_sampler_objects.
https://github.com/anholt/xserver/tree/glamor-sampler-objects
- ajax
Nicolai Hähnle
2017-11-29 09:07:43 UTC
Permalink
Post by Alex Deucher
Post by Adam Jackson
Post by Marek Olšák
Hi everybody,
Mario, feel free to push your patches if you haven't yet. (except the
workaround)
For AMD, I applied Mario's patches (except Wayland - that didn't
https://cgit.freedesktop.org/~mareko/mesa/log/?h=10bit
What's the status of Glamor?
https://cgit.freedesktop.org/xorg/xserver/tree/glamor/glamor_transfer.c#n26
I think we have a branch internally somewhere with support added to
glamor. I thought Nicolai had sent it out a while ago, but maybe I'm
mixing it up with something else.
The patches in question are attached. I thought I had a cleaned up
version somewhere, but it doesn't really matter much either way since it
needs to be cleaned up again for Mario's work anyway (not using
MESA_drm_image, I suppose -- I haven't looked at this in quite some
time). But with these patches it used to work at one point :)
Cheers,
Nicolai
Post by Alex Deucher
Alex
Post by Adam Jackson
At best I'd expect things to be largely unaccelerated. At the moment
glamor assumes pictures are either [ax]rgb8888 or a8 because it doesn't
know about ARB_texture_view/ARB_texture_storage/ARB_sampler_objects.
https://github.com/anholt/xserver/tree/glamor-sampler-objects
- ajax
--
Learn how the world really is,
But never forget how it ought to be.