December 21, 2024

Chrome Headless WebRTC

How to make a headless robot to test WebRTC in your Daily app

We get asked how to create fake participants in a call fairly often. It’s on par with herding cats to coordinate three real-live human devs to each open four Chrome tabs to simulate twelve call participants. “Oh, wait, this test is running on localhost, let me set up an ngrok tunnel… 10 min later ok, try now. Oh, now Vanessa is at lunch. Great, we’ll pick it back up when she’s back.” Three hours later, you have almost confirmed half of your app works with six participants, if all participants are on your machine. This is where Robots (our affectionate name for automated WebDriver instances) step in and save the day, or at the very least, a non-trivial number of hours.
The W3C working draft spec defines WebDriver as “a remote control interface that enables introspection and control of user agents,” a.k.a. a Robot. Most browsers implement this spec, with chromedriver, safaridriver, and geckodriver (Firefox) being the most common examples. There are quite a few libraries and frameworks out there leveraging these drivers to automate browsers for testing, fun, and profit.
This post will go over how to spin up WebRTC-friendly headless Chromium instances, locally or in “the cloud”, to create fake participants. Skip to the “But I want to use Node” section for a Selenium example (implemented in, you guessed it, NodeJS).
Just the facts, ma’am
If testing in Chrome is all you need and you are CLI inclined, here’s the TL;DR:
if [ "$(uname)" = "Darwin" ]; then
  chromerobot="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"
else
  chromerobot=google-chrome-stable
fi

"$chromerobot" \
  --headless \
  --no-sandbox \
  --disable-gpu \
  --disable-sync \
  --no-first-run \
  --disable-dev-shm-usage \
  --user-data-dir=/tmp/chrome \
  --remote-debugging-port=9222 \
  --use-fake-ui-for-media-stream \
  --use-fake-device-for-media-stream \
  --autoplay-policy=no-user-gesture-required \
  --allow-file-access-from-files \
  --use-file-for-fake-video-capture=/full/path/to/daily.y4m \
  --use-file-for-fake-audio-capture=/your/favorite/audio.wav

All flag explanations can be found here, but for the highlights:

--headless: no UI & no display dependencies
--disable-gpu: disable hardware acceleration
--disable-sync: don’t sync browser data to a user account
--disable-dev-shm-usage: useful in Docker containers
--use-fake-ui-for-media-stream: bypasses this Chrome cam/mic permissions dialog

[Image: Camera and microphone permissions dialog]

--use-fake-device-for-media-stream: use Chrome’s fake media streams; video looks like this and audio is a boop boop boop

[Image: Chrome’s “fake” video media stream]

Optional: pass in your own custom media like a cool kid.

--use-file-for-fake-video-capture=video.y4m

The accepted video format is a bit arcane, so feel free to use the daily.y4m file found here if your ffmpeg skills are a bit rusty. Any file will do for audio. If these are not passed in, Chrome’s green pac-man video and boop boop audio are used.
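If you’d rather roll your own clip, something along these lines should work, assuming ffmpeg is installed and input.mp4 is any short video you have handy (.y4m is the yuv4mpegpipe format, which ffmpeg infers from the output extension):

ffmpeg -i input.mp4 -pix_fmt yuv420p video.y4m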
Head(less) in the cloud(s)
For extra credit you can dream big and build off of the above by creating an AMI that runs Chromium on EC2 instances. This can be achieved, step by step, like so:
1. Launch an EC2 instance (perhaps an Amazon Linux 2 AMI-flavored instance)
2. Install Chrome (for example, via curl | bash)
3. Create an AMI from that EC2 instance
4. Launch instances with:
aws ec2 run-instances \
  --region us-east-2 \
  --image-id ami-SHA \
  --count 25 \
  --instance-type <instance-type> \
  --user-data '#!/bin/sh
google-chrome-stable \
  --headless \
  --no-sandbox \
  --disable-gpu \
  --disable-sync \
  --no-first-run \
  --remote-debugging-port=9222 \
  --remote-debugging-address=$(ifconfig | head -n2 | tail -n1 | awk '"'"'{ print $2 }'"'"') \
  --autoplay-policy=no-user-gesture-required \
  --use-fake-ui-for-media-stream \
  --use-fake-device-for-media-stream \
  --user-data-dir=/tmp/chrome'
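Once an instance is up, you can sanity-check that the headless browser is reachable by querying the DevTools HTTP endpoint exposed on the remote debugging port (the address below is a placeholder, and the port has to be open in your security group):

curl http://INSTANCE_PUBLIC_IP:9222/json/version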
But I want to use Node
Fret not, here is the gist of the basic chromedriver-using-selenium-webdriver-npm-module setup. This includes the custom fake media video file and requires the ChromeDriver program, which can be found here.
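Since the linked gist isn’t reproduced here, below is a minimal sketch of that setup, assuming selenium-webdriver is installed, a matching ChromeDriver is on your PATH, and the room URL and media file path are placeholders you’d swap for your own:

// robot.js - one headless Chrome "robot" joining a call page
const { Builder } = require('selenium-webdriver');
const chrome = require('selenium-webdriver/chrome');

async function launchRobot(roomUrl) {
  const options = new chrome.Options().addArguments(
    '--headless',
    '--no-sandbox',
    '--disable-gpu',
    '--use-fake-ui-for-media-stream',
    '--use-fake-device-for-media-stream',
    '--autoplay-policy=no-user-gesture-required',
    '--allow-file-access-from-files',
    '--use-file-for-fake-video-capture=/full/path/to/daily.y4m' // placeholder path
  );
  const driver = await new Builder()
    .forBrowser('chrome')
    .setChromeOptions(options)
    .build();
  await driver.get(roomUrl); // the robot "joins" by loading your call URL
  return driver;             // call driver.quit() to dismiss the robot
}

launchRobot('https://your-team.daily.co/your-room').catch(console.error);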
Gotcha 1: Chrome and ChromeDriver’s versions must match. You’ll see the error SessionNotCreatedError: session not created: This version of ChromeDriver only supports Chrome version __ otherwise.
Protip 1: Check your version of Chrome via chrome://version and download the corresponding driver.
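On a machine without a GUI, a quick CLI check does the same job, assuming google-chrome-stable and chromedriver are both on your PATH:

google-chrome-stable --version
chromedriver --version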
Gotcha 2: “Help! I spun up a bunch of headless robots and they have taken over my machine.” It is true that browser instances without heads are a bit more difficult to catch and kill.
Protip 2: This one-liner will find and kill all WebRTC-friendly ChromeDriver processes and return CPU usage to its rightful owner: the GUI Chrome instance.
kill -9 $(ps | grep -i 'use-fake-ui-for-media-stream' | awk '{print $1}')
[Image: Daily vs Robot]

In conclusion
There are many ways to automate all the things(™) and these are just a few ways to start. After things run smoothly with chromedriver, try leveling up by using geckodriver and safaridriver with selenium-webdriver. Extra points are awarded to those who run multiple different WebDrivers at the same time.
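If you do try geckodriver, note that Firefox takes preferences rather than Chrome-style flags for fake media. A rough sketch under those assumptions (selenium-webdriver plus geckodriver on your PATH; the prefs are Firefox's standard fake-stream and permission toggles, and the URL is a placeholder):

// firefox-robot.js - the same idea, pointed at Firefox via geckodriver
const { Builder } = require('selenium-webdriver');
const firefox = require('selenium-webdriver/firefox');

const options = new firefox.Options();
options.addArguments('-headless');
options.setPreference('media.navigator.streams.fake', true);        // fake cam/mic
options.setPreference('media.navigator.permission.disabled', true); // skip the permission prompt

new Builder()
  .forBrowser('firefox')
  .setFirefoxOptions(options)
  .build()
  .then((driver) => driver.get('https://your-team.daily.co/your-room')); // placeholder URL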
Happy Roboting!
Creating a webRTC peer *without* a browser, with just a ...

I want to create a WebRTC peer that’s a simple listener/recorder with no “presentation” component (i.e., no HTML/CSS).
If this is possible (with the WebRTC JavaScript APIs), please tell me what standalone JavaScript engine I can use (I’m thinking of installing a standalone V8 engine).
Thank you.
asked May 7 ’13 at 22:49
Very late answer, but I think it’s good to re-evaluate this question, because a lot has changed since this question was asked.
I assume this question was asked because there was no native support for WebRTC yet at the time. But there is now: Android, iOS, Windows, Linux and OSX all support native WebRTC libraries now.
The native libraries can be used to create a PeerConnection and set up a stream to another client (cross-platform). If you want to create any WebRTC-based client application without using a browser, the native libraries are the way to go. No silly standalone JavaScript engine necessary.
Read more here
answered May 1 ’15 at 13:04 by Kevin
I think you could use a server to do so. There’s an npm package bringing WebRTC capabilities to Node.js: node-webrtc.
answered Jan 16 ’14 at 13:00 by Hugo
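To give a sense of what the node-webrtc approach looks like, here is a minimal, hedged sketch assuming the package is installed under its npm name wrtc, which exposes the browser-style RTCPeerConnection API; signaling is left to you:

// offer.js - create a PeerConnection and an SDP offer in Node, no browser involved
const { RTCPeerConnection } = require('wrtc');

async function makeOffer() {
  const pc = new RTCPeerConnection();
  // A data channel is enough to get ICE/SDP generated without any media devices.
  pc.createDataChannel('probe');
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  console.log(pc.localDescription.sdp);
  // Exchanging this SDP with the remote peer (signaling) is up to your own code.
}

makeOffer().catch(console.error);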
You could do this with headless Chrome. Chrome of course has full WebRTC support, but can be started in “headless” mode and then interacted with via the command line or its control interface.
answered Apr 2 ’19 at 15:16 by Eric Hanson
The best way to do this right now is to create a node-webkit application. The unified node + browser context gives you the best of all worlds.
answered May 27 ’14 at 14:45 by ZECTBynmo
I wanted to have a permanently running server-side “Robot” that public peers could connect to in order to test their connection (peer-to-peer vs relay). I was successful with Puppeteer driving headless Chrome. The “Robot” uses basically the same code as the public peers. It runs on Windows and Unix and connected to the signaling and STUN/TURN servers and the individual peers without any code changes.
answered Oct 22 ’18 at 6:32 by Tsunamis
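A rough sketch of that Puppeteer approach, assuming puppeteer is installed and that the page URL is a placeholder for your own player or connection-test page:

// puppeteer-robot.js - headless Chrome with fake media, driven from Node
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({
    headless: true,
    args: [
      '--use-fake-ui-for-media-stream',
      '--use-fake-device-for-media-stream',
    ],
  });
  const page = await browser.newPage();
  await page.goto('https://example.com/your-webrtc-test-page'); // placeholder URL
  // ...drive the page, read stats, assert on the connection type, etc....
  await browser.close();
})();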
If I got you right, you want WebRTC (a primarily browser-targeted feature) to be used without a browser :-)
I could imagine that “emulating” the browser behaviour can be done simply by implementing the necessary API in your own code, either directly inside Rhino or similar, or by actually controlling the interface that handles the media streams in native code.
Thus what has to be done is to implement the WebRTC API that controls capturing A/V from the input devices and sending it to the other side. As I understood it, there shall be no UI node, something like an embedded Ethernet camera with a mic that serves as the A/V capture in a conference room.
I am afraid that it could be a fair piece of work, as the main part is the media and connection handling.
answered Dec 27 ’13 at 15:53 by pxlinux
Using a headless browser for WebRTC load tests - Habr

In the previous article we went over a load test whose data could be used to choose a load-appropriate server. In the course of the testing, we would publish a stream on one WCS, and we would pick up that stream several times using a second WCS. The acquired results could be used as a basis for decisions on server operability.

Some would (justly) have concerns regarding the possible biases in such a test: after all, one of our servers was used to test another one of our servers. Could it be that we were using specially optimized code that skewed the results in our favor?

It is true that the end user will not use a second server to watch streams. The end user will watch them in a browser. This is why, seemingly, the simplest and most logical way would involve manual testing: open a browser, open a tab with the player, specify the stream name and click “Play”; then repeat 1000 times. All that’s left is to find a guinea pig, er, a volunteer tester, and a PC that could handle 1000 tabs with video. Alternatively, and more realistically, the same test could involve a group of people and multiple PCs. Or we could just use a headless browser. In this article we are going to discuss another testing method, one that uses a headless browser. We will compare the test results with the results from the stream capture-based testing.

You can’t leave your head on

A headless browser is a browser without a head. In the context of the frontend, it is an indispensable developer tool with which you can test code, check the quality and consistency of the layout, and programmatically create scenarios of user interaction with the site, then record the results of these scenarios for use in tests. Headless Chrome is a full-featured browser without a graphical interface, which means it draws everything in memory. Headless Chrome is faster and uses less memory than a regular browser. That last statement seems like a contradiction, but the reduction in memory usage is achieved by the absence of a graphical component. The headless browser does no real content rendering, which means that it does not need to render illustrations weighing several gigabytes, with which modern websites are often flooded. At the same time, the headless browser will download all the content from the web page, just like a regular browser. Headless Chrome is software-driven using an API and can be installed on “pure” Linux. You just need to install the package, and the browser will work out of the box, just like its brother with a head, Google Chrome. Testing using Headless Chrome will be closest to reality because it simulates real users connecting to WCS using a browser. So here we load test in a headless browser.

Server under test

We will install WCS on a server with the following characteristics:
- 2x Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz (a total of 24 cores, 48 threads);
- 192GB RAM;
- 2x 10Gbps.

We install WCS and prepare it for operation in production. Read more in the article and in the documentation. In the standard Two-way Streaming example, we published the stream from the virtual camera titled “stream1”. That’s all. Now leave the server to wait for the load test.

Testing server

A server with Ubuntu 20.04 was used in testing. Characteristics:
- 2x Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz (a total of 24 cores, 48 threads);
- 192GB RAM;
- 2x 10Gbps.

Setup and testing process:

1. Install Xvfb and Xorg for working with a virtual output device:

apt-get install xvfb -y
apt-get install x11-xkb-utils xfonts-100dpi xfonts-75dpi xfonts-scalable xfonts-cyrillic xserver-xorg-core xserver-xorg-video-dummy alsa-base -y

2. Configure the virtual monitor by editing the Xorg configuration file under /usr/share/X11/:

Section "Device"
Identifier “Configured Video Device”
Driver “dummy”
Option “ConstantDPI” “true”
VideoRam 192000
EndSection
Section “Monitor”
Identifier “Configured Monitor”
HorizSync 31.5-48.5
VertRefresh 50-70
Modeline "1600x1200" 22.04 1600 1632 1712 1744 1200 1229 1231 1261
Modeline "1600x900" 33.92 1600 1632 1760 1792 900 921 924 946
Modeline "1440x900" 30.66 1440 1472 1584 1616 900 921 924 946
ModeLine "1366x768" 72.00 1366 1414 1446 1494 768 771 777 803
Modeline "1280x1024" 31.50 1280 1312 1424 1456 1024 1048 1052 1076
Modeline "1280x800" 24.15 1280 1312 1400 1432 800 819 822 841
Modeline "1280x768" 23.11 1280 1312 1392 1424 768 786 789 807
Modeline "1360x768" 24.49 1360 1392 1480 1512 768 786 789 807
Modeline "1024x768" 18.71 1024 1056 1120 1152 768 786 789 807
Modeline "768x1024" 19.50 768 800 872 904 1024 1048 1052 1076
EndSection
Section “Screen”
Identifier “Default Screen”
Monitor “Configured Monitor”
Device “Configured Video Device”
DefaultDepth 24
SubSection “Display”
Depth 24
Modes "1600x1200" "1680x1050" "1600x900" "1400x1050" "1440x900" "1280x1024" "1366x768" "1280x800" "1024x768"
EndSubSection
EndSection

3. Install Headless Chrome:

wget -q -O - https://dl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google-chrome.list
apt-get update
apt-get install google-chrome-stable

4. Download and unpack the load testing scripts (an archive with the scripts is available for download):

tar -xvzf <archive>

5. Go to the directory in which the archive was extracted:

cd xload

The directory contains:
- player – a directory with the web player scripts;
- a script for running Xorg;
- a script for running the tests.

The following parameters can be passed to the test script:

Required:
- url – URL of the player page with the necessary parameters, or
- urlsfile – the path to a file containing several such URLs, which will be traversed in order.

Optional:
- stressrate – interval for adding a new subscriber, in milliseconds;
- ttl – subscriber lifetime in seconds;
- maxsubscribers – maximum number of subscribers.

6. Run the test:

./<test script> -url <player page URL> -maxsubscribers 100 -stressrate 500 -ttl 600

The screenshot below shows the test result. Unfortunately, because of the heavy load on the testing server, it was not possible to connect 1000 subscribers to the test. Chrome, although headless, took up all available CPU and RAM resources with only 50 subscribers, as you can see in the CPU Load Average and memory consumption graphs. A sustained CPU Load Average of more than 100 indicates a high load on the processor. The RAM of the testing server also turned out to be more than 50% occupied. And while RAM was still available, the CPU load did not allow us to reach the planned number of viewers: in this test it was possible to connect a little more than 50 subscribers. At the same time, there was no heavy load on the WCS server itself: the CPU Load Average did not exceed 1, pauses of the Z Garbage Collector did not exceed 3.5 milliseconds, and there were no degraded streams.

WebRTC load test “from WCS to WCS”

The testing methodology is discussed in detail in the article and in the documentation. We will use two servers with the following specifications to test using stream capture:
- 2x Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz (a total of 24 cores, 48 threads);
- 192GB RAM;
- 2x 10Gbps.

Let’s launch load testing using the Console web app:

1. On the first server, open the Console app via HTTP. Specify the domain name or IP address of the first server and click “Add node.” This will be the server under test, which will be the source of streams.
2. Then similarly connect the second server, which will simulate subscribers and capture streams.
3. For the first server, run the standard Two-way Streaming example and publish a stream from the web camera. The stream name can be anything.
4. In the Console app, select the second server, click the “Pull streams” button, and set the test parameters:
- Choose node – choose the first server;
- Local stream name – the name of the stream on the testing server into which the stream from the server under test will be captured (an index corresponding to the number of the captured stream will be added to the stream name);
- Remote stream name – the name of the stream published on the server under test;
- Qty – the number of viewers (for our test, 1000).
5. Then press the “Pull” button to start the test.

The test result is in the screenshot below. When testing using stream capture, we managed to reach the planned number of viewers (1000 subscribers). The graphs show the increased load on the WCS server under test. Still, the test using the second WCS server cannot be considered completely independent: we test our server using a second server of the same type, and, as mentioned above, this scenario is very rare in practice.
Despite the fact that we were not able to reach the estimated load of 1000 viewers when testing with Headless Chrome, this test clearly shows that with a small number of real subscribers there is no need to use powerful hardware to host WCS, and you can save some money when buying or renting a server.

Happy streaming!
