Building a Video chat application with Remix and 100ms
Deepankar Bhade
We will be building a video chat application with Remix, the hottest framework at the moment, and the 100ms React SDK. This will be a detailed guide, going from setting up the project all the way to deploying it on ▲ Vercel, so stay tuned.
What is 100ms, first of all? 100ms is a cloud platform that allows developers to add live video and audio conferencing to web, Android, and iOS applications. We will be using its polished React SDK in this project.
Let's start with the project setup. Run the following command, and make sure to choose the Vercel template, since that's where we will be deploying.
```sh
npx create-remix@latest # choose vercel as deployment target
```
Now let's set things up on the 100ms side! It's very straightforward: go to the 100ms dashboard, create an account, and it will walk you through setting up an app. You should then see an app being deployed. You can also follow this guide if you're stuck somewhere.
Now let's install the 100ms React SDK and icons in our project.
```sh
## npm
npm install --save @100mslive/react-sdk@latest @100mslive/react-icons@latest

## yarn
yarn add @100mslive/react-sdk@latest @100mslive/react-icons@latest
```
Let's start by initializing the library. We need to wrap the entire application with the <HMSRoomProvider /> component; this lets us use the hooks for state and actions.
```tsx
import {
  Links,
  LiveReload,
  Meta,
  Outlet,
  Scripts,
  ScrollRestoration,
} from 'remix';
import type { MetaFunction } from 'remix';
import { HMSRoomProvider } from '@100mslive/react-sdk';

export const meta: MetaFunction = () => {
  return { title: 'Remix Video Chat' };
};

export default function App() {
  return (
    <html lang='en'>
      <head>
        <meta charSet='utf-8' />
        <meta name='viewport' content='width=device-width,initial-scale=1' />
        <Meta />
        <Links />
      </head>
      <body>
        <HMSRoomProvider>
          <Outlet />
          <ScrollRestoration />
          <Scripts />
          {process.env.NODE_ENV === 'development' && <LiveReload />}
        </HMSRoomProvider>
      </body>
    </html>
  );
}
```
useHMSStore gives you the complete state of the application, and useHMSActions helps us perform actions such as joining the room, muting our audio/video, and sending messages.
Setting up env

We will need to generate an Auth Token to join a room. You can get your token endpoint from the Developer section in the 100ms dashboard.
Follow this guide to set up environment variables in Remix.
Now create a .env file and add your token endpoint there:

```sh
HMS_TOKEN_ENDPOINT=<YOUR-TOKEN-ENDPOINT>
```

That's it.
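One thing worth noting: the loader we write later builds the request URL as `${endPoint}api/token`, so the endpoint value must end with a trailing slash. As a minimal sketch, a small helper (hypothetical, not part of Remix or the 100ms SDK) could validate and normalize the variable:

```typescript
// Hypothetical helper: validates HMS_TOKEN_ENDPOINT and guarantees a
// trailing slash, since "api/token" is appended to it later.
export function getTokenEndpoint(
  env: Record<string, string | undefined>
): string {
  const endpoint = env.HMS_TOKEN_ENDPOINT;
  if (!endpoint) {
    throw new Error('HMS_TOKEN_ENDPOINT is not set, add it to your .env file');
  }
  return endpoint.endsWith('/') ? endpoint : `${endpoint}/`;
}
```

You could call this as `getTokenEndpoint(process.env)` inside the loader instead of reading the variable directly.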
Flow of the app
To generate the Auth token we need two things: the room_id and the role name. We will get these params via the URL. We will be using API routes and data loading, two of the most powerful features of Remix, to accomplish this.
If a person visits the URL /meeting/:roomId/:role, we can extract those params and generate the token. How do we do this in Remix? We will define a route in our Remix config file, so that whenever someone visits /meeting/* we render the <Meeting /> component.
```js
/**
 * @type {import('@remix-run/dev/config').AppConfig}
 */
module.exports = {
  appDirectory: 'app',
  assetsBuildDirectory: 'public/build',
  publicPath: '/build/',
  serverBuildDirectory: 'api/_build',
  ignoredRouteFiles: ['.*'],
  routes(defineRoutes) {
    return defineRoutes((route) => {
      route('/meeting/*', 'meeting.tsx');
    });
  },
};
```
We will now create some files:

- /app/meeting.tsx -> generates the token, renders the Live component
- /app/components/Live.tsx -> renders the Join or Room component
- /app/components/Join.tsx -> will contain a form to join the room
- /app/components/Room.tsx -> live audio/video chat happens here
Generate Auth token
To generate the Auth token we will make a fetch call to the endpoint, pass the roomId and role, get the token back, and also handle some errors.
Each route module in Remix can export a component and a loader. useLoaderData provides the loader's data to your component. Inside this loader function we will call the fetch API.
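Because of the catch-all route, the loader receives everything after /meeting/ as a single splat param (params['*']). Splitting it into the two expected segments can be sketched as a small pure helper (parseMeetingSlug is a hypothetical name for illustration; the loader below inlines the same check):

```typescript
// Illustrative helper: split the catch-all "/meeting/*" param into its
// two expected segments. Returns null when the URL is not /:roomId/:role.
export function parseMeetingSlug(
  slug: string | undefined
): { roomId: string; role: string } | null {
  const parts = slug?.split('/') ?? [];
  if (parts.length !== 2 || !parts[0] || !parts[1]) return null;
  return { roomId: parts[0], role: parts[1] };
}
```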
Copy the CSS code into the global.css file from here.
```tsx
import { useLoaderData } from 'remix';
import type { LoaderFunction } from 'remix';
import styles from '~/styles/global.css';
import Live from '~/components/Live';

interface ResponseType {
  error: null | string;
  token: null | string;
}

export const links = () => {
  return [{ rel: 'stylesheet', href: styles }];
};

export const loader: LoaderFunction = async ({ params }: any) => {
  const endPoint = process.env.HMS_TOKEN_ENDPOINT;
  const data: ResponseType = {
    token: null,
    error: null,
  };
  const slug = params['*'];
  const url = slug?.split('/');
  if (url?.length === 2) {
    try {
      const response = await fetch(`${endPoint}api/token`, {
        method: 'POST',
        body: JSON.stringify({
          room_id: url[0],
          role: url[1],
        }),
      });
      if (!response.ok) {
        let error = new Error('Request failed!');
        throw error;
      }
      const { token } = await response.json();
      data['token'] = token;
    } catch (error) {
      data['error'] = 'Make sure the RoomId exists in 100ms dashboard';
    }
  } else {
    data['error'] = 'Join via /:roomId/:role format';
  }
  return data;
};

export default function MeetingSlug() {
  const { token, error } = useLoaderData<ResponseType>();
  return (
    <div>
      {!(token || error) ? <h1>Loading...</h1> : null}
      {token ? <Live token={token} /> : null}
      {error ? (
        <div className='error'>
          <h1>Error</h1>
          <p>{error}</p>
          <p>
            Get RoomId from{' '}
            <a href='https://dashboard.100ms.live/rooms'>here</a> and join with
            the role created in it :)
          </p>
        </div>
      ) : null}
    </div>
  );
}
```
We are handling errors here and showing some helpful error messages. Upon successful token generation, we pass the token on to the <Live /> component.
Now, if the person has not joined the room, we will show the join form, i.e. the <Join /> component, and once they have joined we will render the <Room /> component. But how do we know whether the person has joined or not?
We can use selector functions to fetch data from the 100ms store. A selector function fetches information from the state at any point in time; it can answer anything ranging from "how many people are in the room?" to "is my audio on or not?". The answer to all these questions lives in the store.
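As a mental model, a selector is just a pure function from store state to a value. The sketch below illustrates the pattern with a made-up, much smaller state shape than the real 100ms store (the names DemoState, selectIsConnected, and selectPeerCount are purely illustrative):

```typescript
// Simplified illustration of the selector pattern, NOT the real 100ms
// store shape. A selector is a pure function from state to a value.
interface DemoState {
  room: { isConnected: boolean };
  peers: Record<string, { name: string }>;
}

const selectIsConnected = (state: DemoState) => state.room.isConnected;
const selectPeerCount = (state: DemoState) =>
  Object.keys(state.peers).length;

const state: DemoState = {
  room: { isConnected: true },
  peers: { p1: { name: 'Alice' }, p2: { name: 'Bob' } },
};

console.log(selectIsConnected(state), selectPeerCount(state)); // true 2
```

In the real SDK, useHMSStore(selector) subscribes your component to exactly the slice of state the selector reads, and re-renders it only when that slice changes.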
We can check whether the person has joined the room with the help of the selectIsConnectedToRoom selector function. We will also pass the token down to the <Join /> component.
```tsx
import { selectIsConnectedToRoom, useHMSStore } from '@100mslive/react-sdk';
import React from 'react';
import Join from '~/components/Join';
import Room from '~/components/Room';

const Live: React.FC<{ token: string }> = ({ token }) => {
  const isConnected = useHMSStore(selectIsConnectedToRoom);
  return <div>{isConnected ? <Room /> : <Join token={token} />}</div>;
};

export default Live;
```
Now, if you start the server and go to /meeting/:roomId/:role, you should see the Join component, because we haven't joined the room yet.
Note: To get your roomId, visit the Rooms section, and make sure to join with a role that exists in that room.
Now let's work on creating the form. To join a room, we need to call the join() function from useHMSActions. It needs a userName, which we will get from the input, and an authToken, which comes from the prop.
```tsx
import { useHMSActions } from '@100mslive/react-sdk';
import React, { useState } from 'react';

const Join: React.FC<{ token: string }> = ({ token }) => {
  const actions = useHMSActions();
  const [name, setName] = useState('');
  const joinRoom = () => {
    actions.join({
      authToken: token,
      userName: name,
    });
  };
  return (
    <form
      onSubmit={(e) => {
        e.preventDefault();
        joinRoom();
      }}
    >
      <h1>Join Room</h1>
      <input
        value={name}
        onChange={(e) => setName(e.target.value)}
        required
        type='text'
        placeholder='Enter Name'
        maxLength={20}
        minLength={2}
      />
      <button type='submit'>Join</button>
    </form>
  );
};

export default Join;
```
Now, if you fill in the form and submit it, you should see the <Room /> component being rendered. You won't see anything yet, because we haven't added anything to it, so let's do that.
For the <Room /> component we will create the following components:

- /app/components/Header.tsx -> the header
- /app/components/Conference.tsx -> live audio/video goes here
- /app/components/Footer.tsx -> will have the audio/video controls and leave button
```tsx
import Conference from './Conference';
import Footer from './Footer';
import Header from './Header';

const Room = () => {
  return (
    <div>
      <Header />
      <Conference />
      <Footer />
    </div>
  );
};

export default Room;
```
Now, how do we know who is in the room? We can use the selectPeers selector function for this. It gives us an array of peers (the people in the room). All we have to do is map over this array and render a <Peer /> component for each entry; this will show each person's video. We will create the component in the same file.
```tsx
import React from 'react';
import {
  HMSPeer,
  selectPeers,
  useHMSStore,
} from '@100mslive/react-sdk';

const Conference = () => {
  const peers = useHMSStore(selectPeers);
  return (
    <main>
      {peers.map((peer) => (
        <Peer key={peer.id} peer={peer} />
      ))}
    </main>
  );
};

const Peer: React.FC<{ peer: HMSPeer }> = ({ peer }) => {
  return (
    <div className='tile'>
      {/* Render video here */}
    </div>
  );
};

export default Conference;
```
Rendering Video
To render the video, we need to call the attachVideo method of useHMSActions, which accepts a trackId and a DOM element.
But this implementation is abstracted inside the useVideo hook for ease of use. Given a video trackId, this hook returns a ref that can be set on the video element meant to display the video. The hook takes care of attaching and detaching the video, and will automatically detach when the video goes out of view to save bandwidth.
```tsx
// ...
const Peer: React.FC<{ peer: HMSPeer }> = ({ peer }) => {
  return (
    <div className='tile'>
      <Video mirror={peer.isLocal} videoTrack={peer.videoTrack} />
    </div>
  );
};

const Video = ({ videoTrack, mirror }: any) => {
  const { videoRef } = useVideo({
    trackId: videoTrack,
  });
  return (
    <video
      className={mirror ? 'mirror' : ''}
      ref={videoRef}
      autoPlay
      muted
      playsInline
    />
  );
};
```
Now join the room. You will be asked for permission to access your camera; click "Allow", and voila! You can see yourself.
Muting/Unmuting
Right now we are publishing both the audio and video feed of the user whenever they join the room. We may want to allow users to mute/unmute their own tracks, both audio and video.
In this case, we can use the useAVToggle hook, which gives us the current audio/video status of the user as well as functions to toggle them. If you specifically need granular data, such as only the current video status, you can use the selectIsLocalVideoEnabled selector instead, and selectIsLocalAudioEnabled for audio.
```tsx
import { useAVToggle, useHMSActions } from '@100mslive/react-sdk';
import {
  MicOffIcon,
  MicOnIcon,
  VideoOffIcon,
  VideoOnIcon,
  HangUpIcon,
} from '@100mslive/react-icons';

function Footer() {
  const {
    isLocalAudioEnabled,
    isLocalVideoEnabled,
    toggleAudio,
    toggleVideo,
  } = useAVToggle();
  const actions = useHMSActions();
  return (
    <footer>
      <button onClick={toggleAudio}>
        {isLocalAudioEnabled ? <MicOnIcon /> : <MicOffIcon />}
      </button>
      <button onClick={toggleVideo}>
        {isLocalVideoEnabled ? <VideoOnIcon /> : <VideoOffIcon />}
      </button>
      <button onClick={() => actions.leave()}>
        <HangUpIcon />
      </button>
    </footer>
  );
}

export default Footer;
```
Now you should be able to toggle audio/video and leave the room. But how will the other person know if my audio/video is off? For that, we need to show the status on the video tile.
We will get a user's current audio/video status via the selectIsPeerAudioEnabled and selectIsPeerVideoEnabled selector functions; these need a peerId as an argument. We will show the user's avatar when the camera is off, show the audio status, and display the user's name. Let's refactor our <Peer /> component.
Copy the code for the <Avatar /> component from here.
```tsx
import React from 'react';
import {
  HMSPeer,
  selectIsPeerAudioEnabled,
  selectIsPeerVideoEnabled,
  selectPeers,
  useHMSStore,
  useVideo,
} from '@100mslive/react-sdk';
import Avatar from './Avatar';
import { MicOffIcon, MicOnIcon } from '@100mslive/react-icons';

const Conference = () => {
  const peers = useHMSStore(selectPeers);
  return (
    <main>
      {peers.map((peer) => (
        <Peer key={peer.id} peer={peer} />
      ))}
    </main>
  );
};

const Peer: React.FC<{ peer: HMSPeer }> = ({ peer }) => {
  const isAudioOn = useHMSStore(selectIsPeerAudioEnabled(peer.id));
  const isVideoOn = useHMSStore(selectIsPeerVideoEnabled(peer.id));
  return (
    <div className='tile'>
      {!isVideoOn ? <Avatar name={peer.name} /> : null}
      <span className='name'>{peer.name}</span>
      <Video mirror={peer.isLocal} videoTrack={peer.videoTrack} />
      <span className='audio'>
        {!isAudioOn ? <MicOffIcon /> : <MicOnIcon />}
      </span>
    </div>
  );
};

const Video = ({ videoTrack, mirror }: any) => {
  const { videoRef } = useVideo({
    trackId: videoTrack,
  });
  return (
    <video
      className={mirror ? 'mirror' : ''}
      ref={videoRef}
      autoPlay
      muted
      playsInline
    />
  );
};

export default Conference;
```
And that's it. Isn't it amazing how we built the entire application with minimal, easy-to-understand code?
Deploy on Vercel
If you want to deploy the app directly, just click the button below, add your token endpoint, and that's it.
You can find the code for this project here.
More
If you're interested in adding more features, refer to our docs. Here are some links:
Thank you, and have a great day. Feel free to message me on Twitter if you have any questions about this.