Building a video conferencing app with Agora and React

September 07, 2023

by Ekaansh Arora in Tips

Developing real-time engagement applications where users interact with each other through live audio, video, and text is a complex undertaking. Building the infrastructure and logic to support these features takes significant time and effort, and the hardest part is making that infrastructure reliable, scalable, and low-latency so it delivers the best user experience.

At Agora, we’re solving this problem for developers at scale. Agora’s Software-Defined Real-Time Network™ provides the broadest range of coverage throughout the world (200+ countries), while delivering high-quality audio and video with ultra-low latency (400ms or less). To make leveraging the Agora platform easy, we offer developer-friendly SDKs for Android, iOS/macOS, Windows, Web, Electron, Flutter, React Native, Unity, and more. With our SDKs you can build and deploy your real-time engagement application in a matter of hours instead of days.

Getting back to the topic for this blog, how does one build a video conferencing app with Agora and React? Agora recently announced a new beta SDK for React. We’ll look at how it works with a simple demo app.

Getting Started

Creating an Account with Agora

  • Sign up for an account and log in to the Agora Console.
  • Navigate to the Project List tab under the Project Management tab.
  • Create a project by clicking the blue “Create” button.
  • When prompted to use App ID + Certificate, select App ID only.
  • Retrieve the App ID, which will be used to authorize your requests while you’re developing the application.

Note: This blog does not implement token authentication, which is recommended for all RTE apps running in production environments. For more information about token-based authentication in the Agora platform, see this guide.

Setting up a React Project

The source code for this project is available on GitHub, and you can also try out a live demo.
To follow along, scaffold a React project using Vite:

  1. Ensure that you have installed Node.js LTS and NPM.
  2. Open a terminal and execute npm create vite@latest agora-videocall -- --template react-ts
  3. This creates a folder named “agora-videocall”
  4. Navigate to the project: cd agora-videocall
  5. Install the dependencies: npm i agora-rtc-react agora-rtc-sdk-ng
  6. You can start a dev server by running npm run dev

Time to Code

We’ll start in the App.tsx file. Since this demo is going to be really simple, we’ll create all our components in the same file. Let’s start by importing the dependencies we’ll use in our application.

import { useState } from "react";
import { AgoraRTCProvider, useJoin, useLocalCameraTrack, useLocalMicrophoneTrack, usePublish, useRTCClient, useRemoteAudioTracks, useRemoteUsers, RemoteUser, LocalVideoTrack } from "agora-rtc-react";
import AgoraRTC from "agora-rtc-sdk-ng";
import "./App.css";

The Agora React SDK provides a set of hooks and components to manage the state of your application and to render the video call interface.
In our App, let’s initialize a client object from the Agora SDK and pass it to the useRTCClient hook. The client object represents the local user in the video call. Passing the object to the useRTCClient hook makes it available to the rest of the application (and its hooks) through a React Provider. We’ll add the provider in a bit; first, let’s set up our application state:

const App = () => {
  const client = useRTCClient(AgoraRTC.createClient({ codec: "vp8", mode: "rtc" }));
  const [channelName, setChannelName] = useState("test");
  const [AppID, setAppID] = useState("");
  const [token, setToken] = useState<string | null>(null);
  const [inCall, setInCall] = useState(false);

  • channelName: Represents the name of the channel where users can join to chat with one another. Let’s call our channel “test”.
  • AppID: Holds the Agora App ID that we obtained before from the Agora Console. Replace the empty string with your App ID.
  • token: If you’re using tokens, you can provide one here. But for this demo we’ll just set it as null.
  • inCall: A Boolean state variable to track whether the user is currently in a video call or not.

Next, let’s define what the App component renders. In the return block, we’ll render an h1 element to display our heading. Then, based on the inCall state variable, we’ll render either a Form component that collects the details (App ID, channel name, and token) from the user or the video call itself:

return (
    <div style={styles.container}>
      <h1>Agora React Videocall</h1>
      {!inCall ? (
        <Form
          AppID={AppID}
          setAppID={setAppID}
          channelName={channelName}
          setChannelName={setChannelName}
          token={token}
          setToken={setToken}
          setInCall={setInCall}
        />
      ) : (
        null /* Videocall here */
      )}
    </div>
  );
};

To create the video call component, let’s first wrap it with the AgoraRTCProvider component. This component accepts the client returned from the useRTCClient hook and makes it accessible down the tree, so you should add it at the top level of your video call.
Next, we’ll create a <Videos> component to hold the users’ videos, passing it our props from before. We’ll also display an End Call button that ends the call by setting the inCall state to false:

return (
    <div style={styles.container}>
      <h1>Agora React Videocall</h1>
      {!inCall ? (
        <Form
          AppID={AppID}
          setAppID={setAppID}
          channelName={channelName}
          setChannelName={setChannelName}
          token={token}
          setToken={setToken}
          setInCall={setInCall}
        />
      ) : (
        <AgoraRTCProvider client={client}>
          <Videos channelName={channelName} AppID={AppID} token={token} />
          <button onClick={() => setInCall(false)}>End Call</button>
        </AgoraRTCProvider>
      )}
    </div>
  );
};

export default App;
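
For reference, the Vite react-ts template already mounts the App component in src/main.tsx. The generated file looks roughly like this (it may differ slightly depending on your template version):

import React from "react";
import ReactDOM from "react-dom/client";
import App from "./App";
import "./index.css";

// Mount the App component on the #root element defined in index.html
ReactDOM.createRoot(document.getElementById("root")!).render(
  <React.StrictMode>
    <App />
  </React.StrictMode>
);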

The Videos Component

We destructure the props to access the AppID, channelName, and token.
The Agora React SDK also gives you the useLocalMicrophoneTrack and useLocalCameraTrack hooks, which create and set up the local microphone and camera tracks, respectively. Since creating these tracks is asynchronous, each hook also returns a loading state and an error state alongside the track.

function Videos(props: { channelName: string; AppID: string; token: string | null }) {
  const { AppID, channelName, token } = props;
  const { isLoading: isLoadingMic, localMicrophoneTrack } = useLocalMicrophoneTrack();
  const { isLoading: isLoadingCam, localCameraTrack } = useLocalCameraTrack();

We can use the useRemoteUsers hook to access the other (remote) users that join our video call. This hook returns an array of objects, where each object represents a remote user in the call. The array behaves like React state: it gets updated each time someone joins or leaves the channel, so we’ll use it to render our UI and keep it in sync with the state of the call:

 const remoteUsers = useRemoteUsers();

We can use the usePublish hook to publish the local microphone and camera tracks. You pass in an array of the tracks you want to publish to the channel; other users who join the same channel can subscribe to and view these tracks.

 usePublish([localMicrophoneTrack, localCameraTrack]);

To start the call, we need to join a room or a channel. We can do that by calling the useJoin hook and passing in the AppID, channelName, and token:

useJoin({
    appid: AppID,
    channel: channelName,
    token: token === "" ? null : token,
  });

We can access the remote users’ audio tracks with the useRemoteAudioTracks hook by passing it the remoteUsers array. This hook automatically handles subscribing to and unsubscribing from the users’ tracks as your component mounts and unmounts and as tracks become available.

const { audioTracks } = useRemoteAudioTracks(remoteUsers);

To listen to the remote users’ tracks, we can map over the audioTracks array and call the play method for each available track:

 audioTracks.map((track) => track.play());

We’ll check if either the microphone or the camera is still loading and render a simple loading message:

const deviceLoading = isLoadingMic || isLoadingCam;
  if (deviceLoading) return <div style={styles.grid}>Loading devices...</div>;

Once the tracks are ready, we can render a grid with videos of all the users in the channel. We can render the user’s own (local) video track using the LocalVideoTrack component from the SDK, passing it the localCameraTrack as the track prop:

return (
    <div style={{ ...styles.grid, ...returnGrid(remoteUsers) }}>
      <LocalVideoTrack track={localCameraTrack} play={true} style={styles.gridCell} />
      {/* Remote videos here */}
    </div>
  );
}

We can display the remote users’ video tracks using the RemoteUser component. We’ll iterate through the remoteUsers array, passing each user as a prop to it:

return (
    <div style={{ ...styles.grid, ...returnGrid(remoteUsers) }}>
      <LocalVideoTrack track={localCameraTrack} play={true} style={styles.gridCell} />
      {remoteUsers.map((user) => (
        <RemoteUser user={user} key={user.uid} style={styles.gridCell} />
      ))}
    </div>
  );
}

These components are unopinionated so you can style them as you like.

That’s all the code we need to build a video conferencing app with Agora and React. Here’s what the final code looks like:

function Videos(props: { channelName: string; AppID: string; token: string | null }) {
  const { AppID, channelName, token } = props;
  const { isLoading: isLoadingMic, localMicrophoneTrack } = useLocalMicrophoneTrack();
  const { isLoading: isLoadingCam, localCameraTrack } = useLocalCameraTrack();
  const remoteUsers = useRemoteUsers();
  const { audioTracks } = useRemoteAudioTracks(remoteUsers);

  usePublish([localMicrophoneTrack, localCameraTrack]);
  useJoin({
    appid: AppID,
    channel: channelName,
    token: token === "" ? null : token,
  });

  audioTracks.map((track) => track.play());
  const deviceLoading = isLoadingMic || isLoadingCam;
  if (deviceLoading) return <div style={styles.grid}>Loading devices...</div>;

  return (
    <div style={{ ...styles.grid, ...returnGrid(remoteUsers) }}>
      <LocalVideoTrack track={localCameraTrack} play={true} style={styles.gridCell} />
      {remoteUsers.map((user) => (
        <RemoteUser user={user} key={user.uid} style={styles.gridCell} />
      ))}
    </div>
  );
}

Form and styling

For the sake of completeness, here’s what the Form component looks like:

function Form(props: {
  AppID: string;
  setAppID: (appID: string) => void;
  channelName: string;
  setChannelName: (channelName: string) => void;
  token: string | null;
  setToken: (token: string | null) => void;
  setInCall: (inCall: boolean) => void;
}) {
  const { AppID, setAppID, channelName, setChannelName, token, setToken, setInCall } = props;
  return (
    <div>
      <p>Please enter your Agora AppID and Channel Name</p>
      <label htmlFor="appid">Agora App ID: </label>
      <input id="appid" type="text" value={AppID} onChange={(e) => setAppID(e.target.value)} placeholder="required"/>
      <br /><br />
      <label htmlFor="channel">Channel Name: </label>
      <input id="channel" type="text" value={channelName} onChange={(e) => setChannelName(e.target.value)} placeholder="required" />
      <br /><br />
      <label htmlFor="token">Channel Token: </label>
      <input id="token" type="text" value={token} onChange={(e) => setToken(e.target.value)} placeholder="optional" />
      <br /><br />
      <button onClick={() => AppID && channelName ? setInCall(true) : alert("Please enter Agora App ID and Channel Name")}>
        Join
      </button>
    </div>
  );
}
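
The snippets above also reference a styles object and a returnGrid helper. These aren’t part of the SDK; they only control the layout of the video grid. The exact values used in the demo repository may differ, but a minimal sketch could look like this:

// Hypothetical layout helpers; not part of the Agora SDK. Adjust to taste.
import type { IAgoraRTCRemoteUser } from "agora-rtc-sdk-ng";

// Pick a grid column count based on how many remote users are in the call.
const returnGrid = (remoteUsers: IAgoraRTCRemoteUser[]) => ({
  gridTemplateColumns:
    remoteUsers.length > 8
      ? "repeat(4, 1fr)"
      : remoteUsers.length > 3
      ? "repeat(3, 1fr)"
      : "repeat(2, 1fr)",
});

const styles = {
  container: { display: "flex", flexDirection: "column", alignItems: "center" },
  grid: { display: "grid", gap: "1rem", width: "100%" },
  gridCell: { height: "300px", width: "100%" },
} as const;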

Conclusion

That’s all it takes to put together a high-quality video conferencing app with the Agora React SDK. We’ve barely scratched the surface in terms of what’s possible. You can add a ton of features like virtual backgrounds, selective subscriptions, waiting rooms and so on. Learn more by visiting the docs and our API reference.

We’re looking for feedback on how we can improve the SDK in this beta period. Please contribute by opening issues (and submitting PRs) on our GitHub repo.

