![xquartz ssh forwarding](http://blog.wenzlaff.de/wp-content/uploads/2015/11/Bildschirmfoto-2015-11-27-um-14.04.20.png)
![xquartz ssh forwarding](https://blog.shichao.io/_images/xquartz-freerdp.png)

With my 25mbit connection, I can stream HD video to my computer absolutely without a problem. On the other hand, remotely launched GUIs over X11 forwarding are unresponsive even on a 100mbit LAN, where the latency should be near zero. I understand that, as opposed to video streaming, the latency will at best be doubled (the input needs to be sent to the remote machine, and only after that can the application respond), but internally, are there other factors which increase the latency even further?

When it comes to picture and video formats, many methods are used to drastically reduce the size. A large black square image, for example, will take far less space as a .png, because information is not stored for every single pixel but in a range-ish way, as far as I understand. In the case of videos, a whole lot of information can be saved by sending the difference between frames rather than the whole frames. I know this is very simplified, but is X11 not using these methods? Does it behave in a bitmap-ish or non-differential way at some level? My question is: what, at the concept/protocol level, causes this? Why does it eat up so much bandwidth?

![xquartz ssh forwarding](https://wangjunjian.com/images/2021/x11/ssh-x11-forwarding.png)

Basically, X11 doesn't send the screen to your computer; it sends display instructions, so the X server on your local computer can re-create the screen on your local system. Back in the day when X11 was first designed, computer graphics were a lot simpler than they are today. So your computer receives a stream of instructions like "draw a line in this color from (x,y) to (xx,yy)", "draw a rectangle W pixels wide and H pixels high with its upper-left corner at (x,y)", and so on. And this needs to be done on each change/refresh of the display.

This is very efficient if the display to be rendered consists of a limited number of simple graphical shapes and only a low refresh frequency (no animations and such) is needed, which was the case back in the days when X11 was first developed. But modern GUIs have a lot of eye-candy, and much of it needs to be sent from the remote system to your client in the form of bitmaps/textures/fonts, which take quite a lot of bandwidth. And all sorts of eye-candy involve animated effects requiring frequent updates. On top of that, the local client isn't really aware of what needs to be updated, and the remote system has very little information about what the client actually needs, so the server must send a lot of redundant information that the client may or may not need. The X11 protocol was simply never meant to handle graphically (in terms of bitmaps/textures) intensive operations.

![xquartz ssh forwarding](https://www.jwillikers.com/X11%20Forwarding%20GNOME%20Terminal%20on%20macOS%20Catalina.png)
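The bandwidth argument can be made concrete with a small sketch. The byte layouts and sizes below are invented for illustration (this is not the real X11 wire format): it compares a vector-style drawing command, a raw bitmap of the same screen, and a frame-to-frame diff, and shows why a flat image compresses so well.

```python
import struct
import zlib

W, H = 640, 480          # display size in pixels (assumed)
BYTES_PER_PIXEL = 3      # 24-bit RGB

# 1. A made-up "draw filled rectangle" instruction: opcode + x, y, w, h + RGB color.
#    A handful of bytes, no matter how large the rectangle is.
draw_command = struct.pack(">B4H3B", 1, 0, 0, W, H, 0, 0, 0)  # black full-screen rect

# 2. The same all-black screen as a raw, uncompressed bitmap: one value per pixel.
raw_frame = bytes(W * H * BYTES_PER_PIXEL)

# 3. A second frame where only a 10x10 region changed: a diff-based scheme
#    (as video codecs use) sends just the changed region plus its coordinates.
changed_region = struct.pack(">4H", 100, 100, 10, 10) + b"\xff" * (10 * 10 * BYTES_PER_PIXEL)

print("drawing command:", len(draw_command), "bytes")        # 12 bytes
print("raw bitmap:     ", len(raw_frame), "bytes")           # 921600 bytes
print("frame diff:     ", len(changed_region), "bytes")      # 308 bytes

# The .png point from the question: a flat black frame compresses extremely
# well, because long runs of identical pixels are not stored one by one.
print("compressed flat frame:", len(zlib.compress(raw_frame)), "bytes")
```

A vector instruction is efficient for simple shapes, and a diff is efficient for small changes, but a modern GUI pushing textures and animations keeps falling back to something closer to the raw-bitmap case, which is where X11 forwarding loses out.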