Dot-Net-Core

WebRTC and Asp.NetCore

  • May 8, 2018

I want to record an audio stream from my Angular web app to my Asp.net Core API.

I think SignalR and its websockets are a good way to do that.

With this TypeScript code, I can get a MediaStream:

import { HubConnection } from '@aspnet/signalr';

[...]

private stream: MediaStream;
private connection: webkitRTCPeerConnection;
@ViewChild('video') video;

[...]

navigator.mediaDevices.getUserMedia({ audio: true })
 .then(stream => {
   console.trace('Received local stream');
   this.video.srcObject = stream;
   this.stream = stream;

   const _hubConnection = new HubConnection('[MY_API_URL]/webrtc');
   _hubConnection.send('SendStream', stream);
 })
 .catch(function (e) {
   console.error('getUserMedia() error: ' + e.message);
 });

And I handle the stream in my .NetCore API:

 public class MyHub : Hub
 {
     public void SendStream(object o)
     {
     }
 }

But when I cast o to a System.IO.Stream, I get a null value.

While reading the WebRTC documentation, I came across RTCPeerConnection and IceConnection… do I need that?

How can I stream audio from the web client to the Asp.netCore API using SignalR? Documentation? GitHub?

Thanks for your help.

I found a way to access the microphone stream and transmit it to the server; here is the code:

 private audioCtx: AudioContext;
 private stream: MediaStream;

 // Convert Float32 samples in [-1, 1] to 16-bit signed PCM.
 convertFloat32ToInt16(buffer: Float32Array) {
   let l = buffer.length;
   const buf = new Int16Array(l);
   while (l--) {
     // Clamp each sample before scaling to the Int16 range.
     buf[l] = Math.max(-1, Math.min(1, buffer[l])) * 0x7FFF;
   }
   return buf.buffer;
 }

 startRecording() {
   navigator.mediaDevices.getUserMedia({ audio: true })
     .then(stream => {
       this.audioCtx = new AudioContext();
       this.audioCtx.onstatechange = (state) => { console.log(state); }

       // Process the microphone input in blocks of 4096 samples, mono in/out.
       const scriptNode = this.audioCtx.createScriptProcessor(4096, 1, 1);
       scriptNode.onaudioprocess = (audioProcessingEvent) => {
         // The input buffer holds the raw microphone samples for this block
         const inputBuffer = audioProcessingEvent.inputBuffer;
         // Loop through the input channels (in this case there is only one)
         for (let channel = 0; channel < inputBuffer.numberOfChannels; channel++) {
           const chunk = inputBuffer.getChannelData(channel);
           console.log("inputBuffer:" + chunk);
           // Convert to 16-bit PCM, because sample format and endianness matter on the server side
           this.MySignalRService.send("SendStream", this.convertFloat32ToInt16(chunk));
         }
       }

       // Route the microphone through the script processor.
       const source = this.audioCtx.createMediaStreamSource(stream);
       source.connect(scriptNode);
       scriptNode.connect(this.audioCtx.destination);

       this.stream = stream;
     })
     .catch(function (e) {
       console.error('getUserMedia() error: ' + e.message);
     });
 }

 stopRecording() {
   try {
     let stream = this.stream;
     stream.getAudioTracks().forEach(track => track.stop());
     stream.getVideoTracks().forEach(track => track.stop());
     this.audioCtx.close();
   }
   catch (error) {
     console.error('stopRecording() error: ' + error);
   }
 }
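
On the server side, the hub only needs a method whose parameter matches the chunks sent above. Here is a minimal sketch, assuming the chunk arrives as a byte[] (with the JSON protocol you may have to base64-encode it on the client, or use the MessagePack protocol) and that buffering in memory per connection is acceptable; AudioHub and Buffers are illustrative names, not from my actual code:

 using System.Collections.Concurrent;
 using System.IO;
 using Microsoft.AspNetCore.SignalR;

 public class AudioHub : Hub
 {
     // One PCM buffer per connected client.
     private static readonly ConcurrentDictionary<string, MemoryStream> Buffers =
         new ConcurrentDictionary<string, MemoryStream>();

     public void SendStream(byte[] chunk)
     {
         var buffer = Buffers.GetOrAdd(Context.ConnectionId, _ => new MemoryStream());
         buffer.Write(chunk, 0, chunk.Length);
     }
 }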

The next step will be to convert my Int16Array chunks into a wav file.
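
As a rough sketch of that conversion, assuming 16-bit mono PCM at the AudioContext sample rate (often 44100 or 48000 Hz; check audioCtx.sampleRate), the buffered samples only need a standard 44-byte RIFF/WAVE header in front of them. WavWriter and WritePcmAsWav are hypothetical names of mine:

 using System;
 using System.IO;
 using System.Text;

 public static class WavWriter
 {
     // Wrap raw 16-bit mono PCM data in a minimal RIFF/WAVE header.
     public static void WritePcmAsWav(string path, byte[] pcm, int sampleRate = 44100)
     {
         const short channels = 1;
         const short bitsPerSample = 16;
         int byteRate = sampleRate * channels * bitsPerSample / 8;
         short blockAlign = (short)(channels * bitsPerSample / 8);

         using (var fs = new FileStream(path, FileMode.Create))
         using (var w = new BinaryWriter(fs))
         {
             w.Write(Encoding.ASCII.GetBytes("RIFF"));
             w.Write(36 + pcm.Length);               // total file size minus 8
             w.Write(Encoding.ASCII.GetBytes("WAVE"));
             w.Write(Encoding.ASCII.GetBytes("fmt "));
             w.Write(16);                            // fmt chunk size
             w.Write((short)1);                      // PCM format
             w.Write(channels);
             w.Write(sampleRate);
             w.Write(byteRate);
             w.Write(blockAlign);
             w.Write(bitsPerSample);
             w.Write(Encoding.ASCII.GetBytes("data"));
             w.Write(pcm.Length);
             w.Write(pcm);
         }
     }
 }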


Note: I did not include the code for configuring SignalR, since that is not the point here.

Quoted from: https://stackoverflow.com/questions/50220281