Comments (24)

GoogleCodeExporter commented on June 20, 2024
[deleted comment]

from imsdroid.

GoogleCodeExporter commented on June 20, 2024
For issue 30, I cannot see how it is possible to have a link problem unless 
you are mixing static and shared libraries in the "output" folder.

Doubango: There is no limitation on the number of consumers, producers, and 
sessions you can have at the same time. 
tinyWRAP: Has been developed in "quick-and-dirty" mode. The goal was to have 
basic proxy (jni<->native) functions to manage the doubango libraries from 
IMSDroid. If you want full access to all features, then you should use the 
doubango API directly or update the proxy functions. However, in your case 
the solution is easy.
You should:

===1===
Remove the static instances of the producers and consumers, which are: 
ProxyAudioProducer::instance, ProxyVideoProducer::instance, 
ProxyAudioConsumer::instance, and ProxyVideoConsumer::instance. 

===2===
Create two new classes: MediaManager and MediaManagerCallback. The MediaManager 
instance will replace the static instances you removed in step 1. You can 
have something like this:
class MediaManager
{
public:
    MediaManager(MediaManagerCallback callback);
    virtual ~MediaManager();

public:
#if !defined(SWIG)
    static uint64_t requestNewId();

    void addProducer(ProxyProducer*);
    void addConsumer(ProxyConsumer*);
    void removeProducer(uint64_t id);
    void removeConsumer(uint64_t id);

    static MediaManager* instance;
    static MediaManager* getInstance();
#endif


private:
    MediaManagerCallback callback;

    tsk_list_t* consumers;
    tsk_list_t* producers;
};
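As a rough illustration of how these methods could behave, here is a minimal, self-contained sketch. A `std::map` stands in for doubango's `tsk_list_t`, and `findProducer` is a hypothetical helper; none of this is the actual doubango code:

```cpp
#include <cstdint>
#include <map>

// Simplified stand-ins for the real proxy types.
struct ProxyProducer { uint64_t id; };
struct ProxyConsumer { uint64_t id; };

class MediaManager
{
public:
    // Monotonic id generator shared by all producers and consumers.
    static uint64_t requestNewId() {
        static uint64_t next = 0;
        return ++next;
    }

    void addProducer(ProxyProducer* p) { producers[p->id] = p; }
    void addConsumer(ProxyConsumer* c) { consumers[c->id] = c; }
    void removeProducer(uint64_t id)   { producers.erase(id); }
    void removeConsumer(uint64_t id)   { consumers.erase(id); }

    // Hypothetical helper: look up a producer by the id carried in the
    // twrap_*_proxy_* structure (see step 3).
    ProxyProducer* findProducer(uint64_t id) {
        auto it = producers.find(id);
        return it == producers.end() ? nullptr : it->second;
    }

private:
    std::map<uint64_t, ProxyProducer*> producers;
    std::map<uint64_t, ProxyConsumer*> consumers;
};
```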

MediaManagerCallback will inform us when the media layer (tinyMEDIA) 
creates/destroys consumers or producers. You can have something like this:
class MediaManagerCallback
{
public:
    MediaManagerCallback() {  }
    virtual ~MediaManagerCallback() {}

    // New consumer object will be created unless you return a non-zero code
    virtual int OnConsumerCreated(uint64_t id, twrap_media_type_t type) { return -1; }
    // New producer object will be created unless you return a non-zero code
    virtual int OnProducerCreated(uint64_t id, twrap_media_type_t type) { return -1; }

    virtual int OnConsumerDestroyed(uint64_t id) { return -1; }
    virtual int OnProducerDestroyed(uint64_t id) { return -1; }
};
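For illustration, the application side would subclass the callback and return zero to accept creation. A simplified, self-contained sketch (the stand-in types and the `MyMediaCallback` name are assumptions, not part of tinyWRAP):

```cpp
#include <cstdint>

// Simplified stand-in for the real doubango enum.
typedef int twrap_media_type_t;

class MediaManagerCallback
{
public:
    virtual ~MediaManagerCallback() {}
    // Defaults return non-zero: the new object will not be created.
    virtual int OnProducerCreated(uint64_t id, twrap_media_type_t type) { return -1; }
    virtual int OnConsumerCreated(uint64_t id, twrap_media_type_t type) { return -1; }
};

// Application-side subclass: return zero to accept the new object.
class MyMediaCallback : public MediaManagerCallback
{
public:
    int OnProducerCreated(uint64_t id, twrap_media_type_t /*type*/) override {
        lastProducerId = id; // remember the id for later look-ups
        return 0;
    }
    uint64_t lastProducerId = 0;
};
```

The native code only ever holds a `MediaManagerCallback*`, so virtual dispatch routes the call to the subclass.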

===3===
Change the twrap_producer_proxy_audio_t, twrap_producer_proxy_video_t, 
twrap_consumer_proxy_audio_t, and twrap_consumer_proxy_video_t structures by 
adding an id field. Example:

typedef struct twrap_producer_proxy_audio_s
{
    TDAV_DECLARE_PRODUCER_AUDIO;

    tsk_bool_t started;
    uint64_t id;
}
twrap_producer_proxy_audio_t;

===4===
In the Java code, remove all static declarations of the audio/video 
consumers/producers and create a static MediaManager object with its callback.
Each time the media layer (tinyMEDIA) creates a new audio producer, the 
"twrap_producer_proxy_audio_ctor" function will be called, and 
"twrap_producer_proxy_audio_dtor" when it is destroyed. This is true for both 
consumers and producers. In each constructor, use "MediaManager::requestNewId()" 
to get a unique id and call "MediaManagerCallback::OnConsumerCreated()" or 
"MediaManagerCallback::OnProducerCreated()" to alert the Java world that a new 
object has been created. You should also call "MediaManager::addProducer()" or 
"MediaManager::addConsumer()" to take a reference to the object. 
For example, when an audio producer is started, the 
"twrap_producer_proxy_audio_start(tmedia_producer_t* self)" function is called. 
Cast the parameter to "twrap_producer_proxy_audio_t" to get the id of the 
producer, then look into the MediaManager for a proxy producer with the same 
id. Once you have the ProxyAudioProducer object, call the 
ProxyAudioProducer::start() callback to alert the Java world that the producer 
has been started. Do the same for all other consumers and producers.
If you really want support for multiple audio/video sessions I can change the 
code.  
Of course, only one session will be able to use the camera at a time.
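Putting steps 1-4 together, the constructor hook could look roughly like the sketch below. The types and hooks are simplified stand-ins, not the actual doubango functions:

```cpp
#include <cstdint>

// Simplified stand-ins; the real structure is declared in step 3.
typedef int twrap_media_type_t;
struct twrap_producer_proxy_audio_t { int started; uint64_t id; };

static uint64_t requestNewId() { static uint64_t next = 0; return ++next; }

// Hypothetical callback hook; the real one would dispatch into the Java world
// through MediaManagerCallback.
static uint64_t g_lastCreatedId = 0;
static int OnProducerCreated(uint64_t id, twrap_media_type_t /*type*/) {
    g_lastCreatedId = id;
    return 0;
}

// Sketch of what twrap_producer_proxy_audio_ctor would do in addition to
// its existing initialization.
static void producer_ctor(twrap_producer_proxy_audio_t* self) {
    self->started = 0;
    self->id = requestNewId();                // unique id for this producer
    OnProducerCreated(self->id, /*audio=*/0); // alert the Java world
}
```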

One fullscreen video call using H263 + PCMA/PCMU takes at least 48% of the CPU 
on an old Android 1.5 device (528MHz). At least 20% of the CPU is used by the 
yuv2rgb functions from libswscale. This is very high because there is no ARM 
optimization in this library as there is in libavcodec and libavutil. This 
could be divided by 3 if we used ARM assembler code from theorarm 
(http://wss.co.uk/pinknoise/theorarm/). I'm working on the issue for an 
IPTV/VoD solution. Of course the final implementation will be part of IMSDroid 
;)

Original comment by [email protected] on 19 Aug 2010 at 12:44

GoogleCodeExporter commented on June 20, 2024
Thanks for the information. No need to do it yourself; save your precious time 
and spend it on something more useful for everyone. I'll implement these 
changes myself in the near future, as soon as I have the promised prototype 
device to test everything on. I will keep you informed of the progress.

Original comment by [email protected] on 19 Aug 2010 at 2:16

GoogleCodeExporter commented on June 20, 2024
First of all, terminology: an asterisk (*) means either producer or consumer.
Now, the problem. I have implemented some of the features listed here, but I still need your help on the subject. Attached is the code I've implemented, in case you need more information. Anyway, let's consider the following scenario: I am using the new architecture. I have SWIGged out the MediaManager and MediaManagerCallback classes (attached, as well as their C++ parts), so I override the callback class methods in the child class, MyMediaManagerCallback (attached). Whenever a twrap_proxy_*_ctor method is called, I call the On*Created method of the callback, but I always end up seeing the TSK_DEBUG_ERROR message from the original C class, as if my child class is never called, only the parent. This is problem number one.
The second problem is in the Proxy*.cxx methods named twrap_proxy_*_prepare (they are incorrect at the moment). When we had a singleton Proxy*::instance, we just checked whether it was NULL, and if it wasn't, we made the take* call, which attached the producer or consumer to its Proxy* instance. Now we have several instances of the Proxy* classes and several calls to twrap_proxy_*_prepare, so we do not actually know to which particular Proxy* we need to attach our consumer or producer structure. We could probably have known that if the callbacks worked. Did you create them exactly for that purpose? If so, how can I make them work? Please help. I hope you have some time to check the code; there isn't much anyway.

Original comment by [email protected] on 16 Sep 2010 at 4:10

Attachments:

GoogleCodeExporter commented on June 20, 2024
As you haven't published the SWIG interfaces, I cannot figure out why the 
callback is not called.
Just make sure that you have registered your callback class with the SWIG 
"director" feature (see SipStack.i for an example). Also don't forget to change 
"autogen.sh" to automatically patch the generated file for Android OS.
Please add "%include <stdint.i>" in your interface file to avoid creating SWIG 
types (SWIGTYPE_p_uint64_t).

Original comment by [email protected] on 17 Sep 2010 at 12:04

GoogleCodeExporter commented on June 20, 2024
Sending you the requested SWIG interface and autogen file. Everything has been 
patched in the same fashion as your stuff; as you can see in the letter above, 
the Java interfaces of MediaManager and MediaManagerCallback are created 
successfully. %include will be added, but it cannot be the reason the callback 
is not called, right?

Original comment by [email protected] on 17 Sep 2010 at 3:49

Attachments:

GoogleCodeExporter commented on June 20, 2024
You MUST use a pointer to pass the callback to the native code from Java.
You should have:
{{{
MediaManager(MediaManagerCallback* callback);
}}}
Instead of:
{{{
MediaManager(MediaManagerCallback callback);
}}}
Attached is the common directory I use for my tests, and everything works as 
expected. I'm using C#.NET, but Java should work too.
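The pointer matters because passing the callback by value slices the derived object: only the base-class part is copied, so virtual overrides are lost and the parent's methods run, which matches the TSK_DEBUG_ERROR symptom described above. A small self-contained illustration (the class names are generic, not the tinyWRAP ones):

```cpp
class Callback
{
public:
    virtual ~Callback() {}
    virtual int OnCreated() { return -1; } // base: "not handled"
};

class MyCallback : public Callback
{
public:
    int OnCreated() override { return 0; } // derived: "handled"
};

// Passing by value copies only the Callback sub-object (slicing),
// so the base method always runs:
int byValue(Callback cb) { return cb.OnCreated(); }

// Passing by pointer preserves the dynamic type, so the override runs:
int byPointer(Callback* cb) { return cb->OnCreated(); }
```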

Original comment by [email protected] on 18 Sep 2010 at 12:01

Attachments:

GoogleCodeExporter commented on June 20, 2024
I was surprised to see MediaManagerCallback instead of a pointer to it in your 
original code, but I thought it must be some special case and just implemented 
what you said. I should have asked when I had my suspicions. And yes, the 
current code you gave me makes the callbacks work, which means I can continue 
implementing my testing stuff. Thanks for the help! You saved a lot of time, 
again.

Original comment by [email protected] on 20 Sep 2010 at 9:19

GoogleCodeExporter commented on June 20, 2024
The code in comment 2 doesn't use SWIG and it's written for Windows Phone 7 
(using C# delegates). WP7 mandates unicode for marshaling.
Do you know if there is a simple way to add support for unicode in SWIG?

Original comment by [email protected] on 21 Sep 2010 at 5:15

GoogleCodeExporter commented on June 20, 2024
No, unfortunately this is the first project I've worked on that uses SWIG, so 
I am definitely less informed than you on the subject. BTW, I added the 
include you suggested to get rid of the SWIGTYPE_p_uint64_t usages; they were 
replaced with java.math.BigInteger. Unfortunately, the library stopped working 
right after that, throwing java.lang.NoSuchMethodError. It took me many hours 
to realize what the reason was, since I had also added some code myself and 
thought it was my problem. As soon as I commented out that include, the 
library started working again. Some signature misinterpretation or 
incompatibility, I think.

Original comment by [email protected] on 21 Sep 2010 at 2:06

GoogleCodeExporter commented on June 20, 2024
MsrpMessage::getByteRange(int64_t,int64_t,int64_t) works fine. Perhaps there is 
a problem with the unsigned type. 
int64_t should be marshaled as long and uint64_t as BigInteger. Very strange.

Original comment by [email protected] on 21 Sep 2010 at 2:18

GoogleCodeExporter commented on June 20, 2024
Continuing the research, I have found one more limitation/singleton 
possibility. I created two SipService instances and made them work in 
parallel. They managed to register together fine, but as soon as I made 
simultaneous SIP calls, problems appeared. I started receiving the 
following error:

09-22 15:56:53.851: WARN/tinyWRAP(4822): **WARN: function: "tsk_fsm_act()" 
09-22 15:56:53.851: WARN/tinyWRAP(4822): file: "src/tsk_fsm.c" 
09-22 15:56:53.851: WARN/tinyWRAP(4822): line: "190" 
09-22 15:56:53.851: WARN/tinyWRAP(4822): MSG: State machine: No matching state 
found.

One theory was that this was due to a single state machine being used by both 
instances of SipService, and the debug info in tsip_dialog_invite_client_init 
showed that this function is called only once, even if I make two calls. 
If you know why, please shed some light on this for me to save a lot of 
research time. A little hint on how to fix this would also be greatly 
appreciated. Thanks in advance.

Original comment by [email protected] on 22 Sep 2010 at 4:09

GoogleCodeExporter commented on June 20, 2024
There is no relation between the warning message and the SipService. The 
current code allows simultaneous SIP calls. The only issue is the audio/video 
producer/consumer singletons.
You don't need two SipService instances to implement video conferencing. If 
you want to register more than one identity, just create as many 
MyRegistrationSession objects as you need.
The state machine instances (tsk_fsm_t) are per SIP dialog. In most cases you 
can safely ignore this warning.

Original comment by [email protected] on 22 Sep 2010 at 4:50

GoogleCodeExporter commented on June 20, 2024
I am sorry, either you or I might be missing something. I am not a guru of the 
imsdroid code, so let's suppose it's me who's wrong, but here are the questions 
I have after your response:
1. You say I don't need two SipService instances to implement more than one 
identity, but how do you imagine your library handling two simultaneous calls? 
Am I supposed to call CallSession_callAudioVideo twice with the same state 
machine? I am confused.
2. Yes, a SIP dialog is not closely related to SipService, but I have two of 
them, I make two different CallSession_callAudioVideo calls from each one, and 
somehow only one dialog init function is called. And I see that there is an 
unexpected state in a state machine. From those two facts I suppose there 
might be only one state machine no matter how many dialogs are created; is 
this possible? With one client it is not noticeable: there is no simultaneous 
work of the dialogs; you register, THEN make a call, THEN hang up, etc. So I 
asked whether that is possible; the two SipService instances are just in case, 
it's safer that way.

Original comment by [email protected] on 23 Sep 2010 at 7:24

GoogleCodeExporter commented on June 20, 2024
I re-checked all the information and it appears this was my mistake, I mean 
point 2 (the dialog not being created). As for point 1 (one SipSession for two 
calls), I am still confused. Thanks for your time.

Original comment by [email protected] on 23 Sep 2010 at 10:58

GoogleCodeExporter commented on June 20, 2024
Of course you cannot (it's a SIP violation and doesn't follow any IETF/3GPP 
specification).
- To call "bob" then "alice": 1) create a SIP session, 2) call bob, 3) hang up, 
4) reuse the same session to call "alice".
- To call "bob" and "alice" at the same time: 1) create two sessions and call 
them.

To implement video conferencing you MUST use a single SIP session, and the 
participants must be embedded in the INVITE body using the "multipart/mixed" 
(or related) content type. To add/remove a participant, just send a REFER 
message.
If you look in the doubango source code you will see that there is an empty 
file named "tsip_dialog_invite.conf.c". This file will contain the MMTel CONF 
implementation as per 3GPP TS 24.147.

An example of SIP INVITE for video conference:

Content-Type: multipart/mixed;boundary="boundary1"
Content-Length: (…)
--boundary1
Content-Type: application/sdp
v=0
o=- 2987933615 2987933615 IN IP6 5555::aaa:bbb:ccc:ddd
s=-
c=IN IP6 5555::aaa:bbb:ccc:ddd
t=0 0
m=audio 3456 RTP/AVP 0
m=video 6060 RTP/AVP 126
a=rtpmap:126 theora/90000
--boundary1
Content-Type: application/resource-lists+xml
Content-Disposition: recipient-list
<?xml version="1.0" encoding="UTF-8"?>
<resource-lists xmlns="urn:ietf:params:xml:ns:resource-lists"
xmlns:cp="urn:ietf:params:xml:ns:copycontrol">
<list>
<entry uri="sip:[email protected]" />
<entry uri="sip:[email protected]"/>
</list>
</resource-lists>
--boundary1--
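Building such a multipart body is plain string concatenation. A minimal sketch (the helper name, boundary, XML skeleton, and example URI are illustrative only, not doubango API):

```cpp
#include <string>
#include <vector>

// Assemble a multipart/mixed INVITE body carrying an SDP part and a
// recipient-list part, as in the example above.
std::string buildConferenceBody(const std::string& sdp,
                                const std::vector<std::string>& uris)
{
    const std::string b = "boundary1";
    std::string body;
    // Part 1: the SDP offer.
    body += "--" + b + "\r\n";
    body += "Content-Type: application/sdp\r\n\r\n";
    body += sdp + "\r\n";
    // Part 2: the recipient list (RFC 4826 resource-lists document).
    body += "--" + b + "\r\n";
    body += "Content-Type: application/resource-lists+xml\r\n";
    body += "Content-Disposition: recipient-list\r\n\r\n";
    body += "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
    body += "<resource-lists xmlns=\"urn:ietf:params:xml:ns:resource-lists\">\n<list>\n";
    for (const std::string& uri : uris)
        body += "<entry uri=\"" + uri + "\"/>\n";
    body += "</list>\n</resource-lists>\r\n";
    // Closing boundary.
    body += "--" + b + "--";
    return body;
}
```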

Original comment by [email protected] on 23 Sep 2010 at 12:38

GoogleCodeExporter commented on June 20, 2024
To monitor the conference session, SUBSCRIBE to the "conference" event package 
as per RFC 4575 (http://www.ietf.org/rfc/rfc4575.txt).

Original comment by [email protected] on 23 Sep 2010 at 12:42

GoogleCodeExporter commented on June 20, 2024
Thanks for the information, but the thing is, my initial target was not to 
implement a video conference itself, but rather to measure CPU, memory, and 
other load during the decoding/encoding process, to understand whether 
implementing such a thing is economically justified on the current generation 
of devices. For that, I decided to slightly modify your code to make two 
registrations instead of one, place two simultaneous calls instead of one, and 
see how the program/library manages to decode two streams at once. I had some 
success implementing your multi-producer/consumer architecture and fixed all 
the bugs, and I can share the results: at this moment it cannot decode two 
streams at once, unfortunately. I am getting a progressive delay and CPU load 
around 92%-102% according to top. I am currently researching possibilities to 
shorten the decode time; as you said, the first thing to look at is YUV 
transcoding optimization. Did you have any success porting theorarm?

Original comment by [email protected] on 4 Oct 2010 at 8:46

GoogleCodeExporter commented on June 20, 2024
And again, my mistake: after going through a profiler I found some 
architectural mistakes in the consumers, and after fixing them the two-screen 
video worked OK with 30-38% CPU load on a Galaxy S, a pretty good result I 
must say.

Original comment by [email protected] on 4 Oct 2010 at 1:22

GoogleCodeExporter commented on June 20, 2024
Hi,
About the CPU usage, you can compare "doubango" with other mobile videophones 
(e.g. fring) but you will end up with the same result ;) There is an issue 
(delay) with CIF video, but this is caused by the Android media system.
For the video conferencing, I have worked with many companies (Cisco, 
Colibria, ...) and they all use an Ad-Hoc conference factory. This means that 
even if you are in a conference with 10 participants (or more), the mobile 
endpoint will only encode/decode one video stream. All ten participants will 
send/receive RTP packets to/from the server. It's up to the server to mix the 
video streams (mosaic).

Even on a 4GHz PC you cannot do video conferencing with 10 participants (or 
more), and this is why these systems are based on an Ad-Hoc factory.
As both the Colibria and Cisco implementations are closed, you can check 
CONFIANCE (http://confiance.sourceforge.net/).

Original comment by [email protected] on 4 Oct 2010 at 1:51

GoogleCodeExporter commented on June 20, 2024
Hi Volkov.r..,
Would you mind sharing the final code with which you got the performance 
numbers? Thx

Original comment by [email protected] on 4 Mar 2011 at 11:22

GoogleCodeExporter commented on June 20, 2024
Would you mind pointing out the architectural mistakes in the consumers? In my 
testing with IMSDroid on a 600MHz processor, I find H263 video @ CIF going 
out at less than 8 fps, and H264 video going out at less than 4 fps, when 
running without a decoder. And when I ran a 3-party call by placing the first 
receiver on hold and making a second call to the 2nd receiver, the performance 
was even shoddier, even though there was only one encode going on at any given 
time.
I think the whole community could benefit from the performance improvements 
you've identified.

Original comment by [email protected] on 7 Mar 2011 at 6:55

GoogleCodeExporter commented on June 20, 2024
@[email protected]
It's nearly impossible to have smooth CIF (>15fps) video on an ARMv5TE device. 
Only QCIF video will work without delay. For CIF or higher you should use an 
ARMv7-A device (Samsung Galaxy S, HTC Hero, ...).
Here is the patch from "[email protected]": 
http://code.google.com/p/doubango/issues/detail?id=21

Original comment by [email protected] on 7 Mar 2011 at 7:52

GoogleCodeExporter commented on June 20, 2024
I installed an Open IMS Core server on my Linux machine and I installed the 
imsdroid application on my phone. Unfortunately, I have not been able to 
establish the connection between the Open IMS Core server and imsdroid. What 
can I do to connect? Thank you

Original comment by [email protected] on 7 Feb 2013 at 5:42
