ionorg / ion-avp
Audio/Video Processing Service
License: MIT License
Hi Pion Team,
I'm trying to integrate AVP with the SFU server. The example requires a running AVP server and a working client. I don't understand the example very well, so I have two questions; maybe you can help me:
1. Does the AVP run in parallel with a working SFU connection, or do I need to connect the AVP in the middle of the call?
2. I started the SFU server and a JS client, and the video works well. But the AVP example requires two params in the client, as shown below:
go run examples/save-to-webm/client/main.go $SESSION_ID
What is SESSION_ID? Is it the string used in the JS client connection?
const clientLocal = new IonSDK.Client("<SESSION ID>", signalLocal);
After the client starts, it requires a Track ID. Is that the string reported by the SFU server after connection, or a random value?
[INFO] [163][peer.go][Trickle] => peer ckgzrgi2g00014y85nwhyvfsm trickle
[DEBUG] [68][webrtctransport.go][func1] => Peer ckgzrgi2g00014y85nwhyvfsm got remote track id: 1504e531-02fc-4f37-828d-9facd80b37c1 mediaSSRC: 3726792027 rid : streamID: cp8oDQ2q4qmjrIb5yE58eKTU1YYT2WlncTqj
The log above is reported by the SFU server. I specified the value 1504e531-02fc-4f37-828d-9facd80b37c1 in the AVP client, but it does not work; nothing happens.
I need the parties in a multi-party video conference to be able to speak different languages while each participant sees the text in their own language.
One way is to do it client-side, as is done here: https://github.com/felixjunghans/google_speech
The other way is to do it on the server, in ion-avp.
Can I get some advice on the best approach, so that I can target my efforts correctly for making PRs? Do it client-side or server-side?
Is there support for subtitles in flutter-webrtc (https://github.com/flutter-webrtc/flutter-webrtc)? flutter-webrtc/flutter-webrtc#432
Everything is running on a local machine, so network issues are unlikely to be the reason.
Config:
AudioMaxLate: 100
VideoMaxLate: 200
MaxLateTimeMs: 1000
PLICycle: 1000
Expected: a readable file of the correct duration is produced.
Actual: an invalid webm is produced. It can't be opened by a browser or VLC, and the binary size is always exactly 440 bytes.
I suspect this is because of some concurrency issue, but I can't put my finger on it. Isn't it supposed to work even if a peer has joined only for a short duration?
If this helps, ffmpeg conversion produces this error:
[matroska,webm @ 0x7ff34b815a00] Duplicate element
[matroska,webm @ 0x7ff34b815a00] Element at 0x41 ending at 0x34e01 exceeds containing master element ending at 0x13ed
[matroska,webm @ 0x7ff34b815a00] Duplicate element
[matroska,webm @ 0x7ff34b815a00] Element at 0x50 ending at 0x6f4e10 exceeds containing master element ending at 0x13fc
[matroska,webm @ 0x7ff34b815a00] Element at 0x5f ending at 0x4f0ecb3 exceeds containing master element ending at 0x140b
test.webm: End of file
Thanks for the help!
Add a default element that records the sessions to disk.
Currently ion-avp doesn't have any default elements, so it can't be run as a standalone server; you have to customise it. I suspect the most common element people want is the ability to record the session to disk (like the webmsaver example). I would like to propose we make this a built-in behavior, accessed via a new gRPC endpoint.
I use ion-avp as a library, with my own copy of webmsaver example. I suspect most people do.
The proposal would add:
Record(sfu address, session id, track id, recording config)
RecordConfig{format: "webm", filename: string, audio: off|mono|stereo, video: off|on, buffersize: int}
Is this something ion-avp would be interested in? Would it work as described?
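To make the shape of the proposal concrete, here is a minimal Go sketch of the config struct and entry point. Every name here (RecordConfig, Record, AudioMode) is hypothetical, mirroring the proposal text above, not an existing ion-avp API:

```go
package main

import "fmt"

// AudioMode mirrors the proposed audio: off|mono|stereo field.
type AudioMode string

const (
	AudioOff    AudioMode = "off"
	AudioMono   AudioMode = "mono"
	AudioStereo AudioMode = "stereo"
)

// RecordConfig is a sketch of the proposed recording config.
type RecordConfig struct {
	Format     string // e.g. "webm"
	Filename   string
	Audio      AudioMode
	Video      bool
	BufferSize int
}

// Record is the proposed entry point: connect to the SFU, subscribe to the
// given session/track, and write it to disk according to cfg. This stub only
// validates input; a real implementation would dial the SFU and attach a
// saver element here.
func Record(sfuAddr, sessionID, trackID string, cfg RecordConfig) error {
	if cfg.Format != "webm" {
		return fmt.Errorf("unsupported format: %s", cfg.Format)
	}
	fmt.Printf("recording %s/%s from %s to %s\n", sessionID, trackID, sfuAddr, cfg.Filename)
	return nil
}

func main() {
	cfg := RecordConfig{Format: "webm", Filename: "out.webm", Audio: AudioStereo, Video: true, BufferSize: 1024}
	if err := Record("localhost:50051", "test-session", "test-track", cfg); err != nil {
		fmt.Println(err)
	}
}
```

Exposed over gRPC, this would presumably become a service method taking an equivalent protobuf message.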
Thanks!
Thanks for such a wonderful project. I am exploring the Pion project and have a use case where I want to add background sound to the audio sent by the clients. What would be the best place to do it: the TURN server, the SFU, or the AVP? Is there a similar example for my use case?
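Wherever such mixing ends up living, its core is just per-sample addition over decoded PCM (so the Opus audio would need decoding first). A minimal, dependency-free Go sketch of mixing a background buffer into a voice buffer, with a gain and clamping of my own choosing, could look like:

```go
package main

import "fmt"

// mixPCM mixes a background signal into a voice signal sample-by-sample,
// scaling the background by bgGain (0.0..1.0) and clamping to int16 range.
func mixPCM(voice, background []int16, bgGain float64) []int16 {
	out := make([]int16, len(voice))
	for i, v := range voice {
		mixed := int32(v)
		if i < len(background) {
			mixed += int32(float64(background[i]) * bgGain)
		}
		// Clamp to avoid wrap-around distortion on overflow.
		if mixed > 32767 {
			mixed = 32767
		} else if mixed < -32768 {
			mixed = -32768
		}
		out[i] = int16(mixed)
	}
	return out
}

func main() {
	voice := []int16{1000, -2000, 32000}
	bg := []int16{500, 500, 32000}
	// Background at half volume; the last sample clamps at 32767.
	fmt.Println(mixPCM(voice, bg, 0.5)) // [1250 -1750 32767]
}
```

Doing this in the AVP (which already receives decodable media as samples) seems more natural than the TURN server, which only relays encrypted packets and cannot touch the audio.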
[2020-10-06 15:14:15.760] [INFO] [87][webrtctransport.go][func1] => Got track: 2ad6244c-f139-4fe0-8049-afe7073ff6af
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xa6e8d8]
goroutine 190 [running]:
github.com/pion/ion-avp/pkg.(*WebRTCTransport).Process.func1(0xef8aa0, 0xc000659ce0)
/go/pkg/mod/github.com/pion/[email protected]/pkg/webrtctransport.go:189 +0x78
github.com/pion/ion-avp/pkg.NewWebRTCTransport.func1(0xc0003ae820, 0xc000205740)
/go/pkg/mod/github.com/pion/[email protected]/pkg/webrtctransport.go:99 +0x371
created by github.com/pion/webrtc/v3.(*PeerConnection).onTrack
/go/pkg/mod/github.com/pion/webrtc/[email protected]/peerconnection.go:424 +0x10d
exit status 2
Everything is running on a local machine, so network issues are unlikely to be the reason.
Config:
AudioMaxLate: 100
VideoMaxLate: 200
MaxLateTimeMs: 1000
PLICycle: 1000
WebmSaver
Expected: the video plays without gaps.
Actual: the video has significant gaps (~5 s) in a 1-minute test video. This drives dash.js crazy, and it tries to fetch the same segment in an infinite loop; Shaka Player doesn't even try to play the video after the first gap. However, if loaded as a plain video file in Chrome or VLC, the video looks completely fine.
I know it's a complicated issue, but I have no clue about encoding algorithms and what could go wrong here. Since sometimes I am able to produce a perfectly fine streamable dash compatible file, my guess would be that the conversion is fine. The problem must be in the original writer, which produces gaps in files. Maybe race conditions or network out-of-order delivery.
I wonder if you have any ideas on what could be wrong or at least where to start. I can provide a docker file to reproduce the issue on any generated file with one command.
Thanks!
Hi Pion Team, and thanks for this wonderful package!
Please tell me: how can I use ffmpeg for processing audio and video streams in real time? This is stated on the ion page. For example, ffmpeg accepts a stream and transmits it to an RTMP channel.
Thanks!
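One common pattern, sketched here under my own assumptions rather than anything ion ships, is to launch ffmpeg as a child process from Go, feed it a container stream on stdin, and let it publish to RTMP. The ffmpeg flags are standard options; the RTMP URL is a placeholder:

```go
package main

import (
	"fmt"
	"os/exec"
)

// rtmpArgs builds a standard ffmpeg argument list: read a stream from
// stdin, transcode to H.264/AAC, and publish to an RTMP endpoint.
func rtmpArgs(rtmpURL string) []string {
	return []string{
		"-i", "pipe:0", // read the input stream from stdin
		"-c:v", "libx264",
		"-c:a", "aac",
		"-f", "flv", // RTMP requires the FLV muxer
		rtmpURL,
	}
}

func main() {
	cmd := exec.Command("ffmpeg", rtmpArgs("rtmp://localhost/live/stream")...)
	stdin, err := cmd.StdinPipe()
	if err != nil {
		fmt.Println(err)
		return
	}
	// A real integration would cmd.Start() and then write webm/ivf bytes
	// produced by an AVP element into stdin; this sketch stops short of that.
	_ = stdin
	fmt.Println(cmd.Args)
}
```

The save-to-webm example is a reasonable starting point: instead of writing samples to a file, write them to ffmpeg's stdin pipe.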
I would like to be able to attach an Element to all tracks, as they arrive.
The use case is to record audio in an audio conferencing server, where we need all the tracks in a session (everyone's audio on the call). The tracks would be mixed later (with ffmpeg probably). This could apply to video too, where the post-processing might for example display all the recorded videos in a grid.
I think there are a few ways this could be done:
1. Pass track id "ALL" to WebRTCTransport.Process. Create a new element for every existing track, and save the element id to attach to any new tracks.
2. Add a new method in server/avp.go, for example ProcessAll, which does the same as 1.
3. A callback from WebRTCTransport (probably called from sub.OnTrack) that tells us there's a new track. We would then call Process normally, now that we know the track id. The callback would be attached when the session is created, before any tracks are created. This is quite flexible, leaving most of the decision-making in the caller.
4. Something else? Maybe there's already a way to do this that I'm missing?
Would this be something ion-avp would accept? If yes, what design would fit best? I have a very hacky version of 1 working now, but I'd like to do it right before I try to submit it.
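To make the callback option concrete, here is a rough Go sketch of what the caller side could look like. OnNewTrack, TrackInfo, and the Transport stand-in are hypothetical names, not existing ion-avp API:

```go
package main

import "fmt"

// TrackInfo is a hypothetical payload for a new-track callback.
type TrackInfo struct {
	SessionID string
	TrackID   string
}

// Transport is a stand-in for WebRTCTransport with the proposed hook:
// the callback fires (from OnTrack) for every track as it arrives.
type Transport struct {
	onNewTrack func(TrackInfo)
}

// OnNewTrack registers the callback; it should be attached when the
// session is created, before any tracks exist.
func (t *Transport) OnNewTrack(f func(TrackInfo)) { t.onNewTrack = f }

// emitTrack simulates the transport's OnTrack handler firing.
func (t *Transport) emitTrack(info TrackInfo) {
	if t.onNewTrack != nil {
		t.onNewTrack(info)
	}
}

func main() {
	var processed []string
	t := &Transport{}
	t.OnNewTrack(func(info TrackInfo) {
		// Here the caller would call Process(info.SessionID, info.TrackID, ...)
		// with a fresh element, e.g. a per-track webm saver.
		processed = append(processed, info.TrackID)
	})
	t.emitTrack(TrackInfo{SessionID: "s1", TrackID: "audio-1"})
	t.emitTrack(TrackInfo{SessionID: "s1", TrackID: "audio-2"})
	fmt.Println(processed) // [audio-1 audio-2]
}
```

The appeal of this design is that all policy (which tracks to process, which element to attach) stays with the caller, while the library only reports track arrival.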
Thanks!
Hi everyone, and thank you for writing such an amazing package!
I use the save-to-webm example to save RTP packets to disk, but I have a serious problem: sometimes it takes too long for a keyframe to be generated. In Chromium Version 87.0.4280.66 it takes about 10 seconds, which isn't such a big deal, but on Safari 11.1 on iOS it takes about two minutes before a keyframe is recognized and writing to disk is initiated!
```go
// Read VP8 header.
videoKeyframe := (sample.Data[0]&0x1 == 0)
if videoKeyframe {
	// Keyframe has frame information.
	raw := uint(sample.Data[6]) | uint(sample.Data[7])<<8 | uint(sample.Data[8])<<16 | uint(sample.Data[9])<<24
	width := int(raw & 0x3FFF)
	height := int((raw >> 16) & 0x3FFF)
	if s.videoWriter == nil || s.audioWriter == nil {
		// Initialize WebM saver using received frame size.
		s.InitWriter(width, height)
	}
}
if s.videoWriter != nil {
	s.videoTimestamp += sample.Samples
	t := s.videoTimestamp / 90
	if _, err := s.videoWriter.Write(videoKeyframe, int64(t), sample.Data); err != nil {
		fmt.Println(err)
	}
}
```
I also tried initiating the writing process with a fixed width and height, but the output file could not be used.
I'm wondering if there is any workaround or hack to address this issue?
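For reference, the keyframe detection and dimension parsing from the snippet above can be exercised in isolation. This is a small, self-contained Go version of the same VP8 header logic, run against a synthetic buffer rather than real samples:

```go
package main

import "fmt"

// isVP8Keyframe: in the VP8 frame header, the low bit of the first byte
// is the frame-type flag, and it is 0 for a keyframe.
func isVP8Keyframe(frame []byte) bool {
	return len(frame) > 0 && frame[0]&0x1 == 0
}

// vp8Dimensions extracts width and height from a VP8 keyframe header:
// 14-bit little-endian values at byte offsets 6-7 (width) and 8-9 (height).
func vp8Dimensions(frame []byte) (width, height int) {
	raw := uint(frame[6]) | uint(frame[7])<<8 | uint(frame[8])<<16 | uint(frame[9])<<24
	return int(raw & 0x3FFF), int((raw >> 16) & 0x3FFF)
}

func main() {
	// Synthetic keyframe header for a 640x480 frame:
	// 640 = 0x0280 -> bytes 0x80, 0x02; 480 = 0x01E0 -> bytes 0xE0, 0x01.
	frame := []byte{0x00, 0, 0, 0, 0, 0, 0x80, 0x02, 0xE0, 0x01}
	fmt.Println(isVP8Keyframe(frame)) // true
	w, h := vp8Dimensions(frame)
	fmt.Println(w, h) // 640 480
}
```

This confirms the parsing itself is fine; the delay you see is about how rarely the browser emits keyframes, not about detecting them.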
There is an error with this repository's Renovate configuration that needs to be fixed. As a precaution, Renovate will stop PRs until it is resolved.
Error type: undefined. Note: this is a nested preset so please contact the preset author if you are unable to fix it yourself.