robotology-legacy / wysiwyd
What You Say Is What You Did
Home Page: http://wysiwyd.upf.edu
Hi all,
somehow the != comparator does not work when looping over beliefs in wrdac/opcEars. I did a quick fix in 0756ec3 to check for size > 0 instead, but it should be done properly, otherwise someone else might run into this issue sooner or later.
Would be great if someone from WP5 could have a look at it :).
Thanks,
Tobias
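For reference, the usual trap here is pointer comparison: if beliefs are handled through pointers, != compares addresses rather than contents, so two equal beliefs still compare as "different". A minimal, self-contained sketch of the proper fix (Belief and its fields are made-up stand-ins for the actual wrdac types):

```cpp
#include <cassert>
#include <string>

// Hypothetical stand-in for a wrdac belief; the real fix would go into
// the actual type used by opcEars.
struct Belief {
    std::string subject, verb, object;
};

// Value-based comparison: without this, code comparing beliefs through
// pointers (b1 != b2) compares addresses, which is never what we want.
inline bool operator==(const Belief &a, const Belief &b) {
    return a.subject == b.subject && a.verb == b.verb && a.object == b.object;
}
inline bool operator!=(const Belief &a, const Belief &b) { return !(a == b); }

// The size > 0 quick fix guards the loop; with a value-based operator the
// comparison itself becomes meaningful again.
inline bool sameBelief(const Belief *a, const Belief *b) {
    return a == b || (a && b && *a == *b);
}
```

With this in place, the loop in opcEars could compare dereferenced beliefs directly instead of relying on the size guard.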
We're experiencing the following CMake error while checking wysiwyd compilation by means of the AppVeyor CI tool (thanks @Tobias-Fischer for spotting). This error showed up only recently, while compilation used to go fine with the same binaries in the past.
CMake Error at c:/Program Files (x86)/robotology/icub-1.1.16/cmake/icub-config.cmake:29 (include):
include could not find load file:
__NSIS_ICUB_INSTALLED_LOCATION__/lib/ICUB/icub-export-install-includes.cmake
Call Stack (most recent call first):
CMakeLists.txt:13 (find_package)
Currently the travis and appveyor CI systems fail to compile the wysiwyd project because the idl auto-generated files contain references to the new yError() routine, and thus the yarp headers need to be updated.
travis and appveyor have been temporarily disabled: remember to enable them again once new yarp binaries are released.
Right now, pasar relies on an agent called partner for the pointing detection etc.; this needs to be extended.
Anyone from UPF willing to take care of this? @jypuigbo @clement-moulin-frier @sockchinglow
I've just caught the following warnings/errors on windows:
c:\dev\wysiwyd\main\src\modules\reservoirhandler\src\reservoirhandler.cpp(802): warning C4715: 'reservoirHandler::nodeTestAP' : not all control paths return a value
c:\dev\wysiwyd\main\src\modules\reservoirhandler\src\reservoirhandler.cpp(922): warning C4715: 'reservoirHandler::nodeTrainSD' : not all control paths return a value
c:\dev\wysiwyd\main\src\modules\reservoirhandler\src\reservoirhandler.cpp(687): warning C4715: 'reservoirHandler::nodeTrainAP' : not all control paths return a value
c:\dev\wysiwyd\main\src\modules\reservoirhandler\src\reservoirhandler.cpp(584): warning C4715: 'reservoirHandler::nodeModality' : not all control paths return a value
c:\dev\wysiwyd\main\src\modules\reservoirhandler\src\reservoirhandler.cpp(361): warning C4715: 'reservoirHandler::nodeType' : not all control paths return a value
reservoirHandler.obj : error LNK2019: unresolved external symbol "class yarp::os::ConstString __cdecl yarp::os::operator+(char const *,class yarp::os::ConstString const &)" (??Hos@yarp@@YA?AVConstString@01@PBDABV201@@Z) referenced in function "public: virtual bool __thiscall reservoirHandler::configure(class yarp::os::ResourceFinder &)" (?configure@reservoirHandler@@UAE_NAAVResourceFinder@os@yarp@@@Z)
21>C:\dev\wysiwyd\main\build\bin\Release\reservoirHandler.exe : fatal error LNK1120: 1 unresolved externals
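The C4715 warnings usually mean an if/else chain can fall off the end of a function without returning anything, which is undefined behaviour when it happens. A hedged sketch of the fix pattern (handleNode and its states are hypothetical, not the actual reservoirHandler code):

```cpp
#include <string>

// Illustration of the C4715 pattern: the real culprits are the node*()
// methods in reservoirHandler.cpp; names here are made up.
inline bool handleNode(const std::string &state) {
    if (state == "train")     return true;
    else if (state == "test") return true;
    // Without this final return, MSVC emits C4715 and the function has
    // undefined behaviour whenever neither branch is taken.
    return false;
}
```

The LNK2019 on yarp::os::operator+ looks more like a link against outdated YARP binaries than a source bug; rebuilding against the current YARP is probably the first thing to try.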
Hi,
there is an issue with iol2opc. Thus far, adding objects to the OPC works nicely. However, if an object gets out of the scene, m_present is still set to true. We therefore need to find a way to:
a) detect when an object is removed from the scene
b) find this object in the OPC and set m_present=false
The issue is that we don't keep track of which objects are added to the OPC. I think we need to create a list of known objects and keep this list up to date. What do you think, @pattacini?
Best,
Tobias
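A minimal sketch of the proposed bookkeeping (all names are made up; the real code would flag the entity in the OPC rather than a local struct):

```cpp
#include <set>
#include <string>
#include <vector>

// Hypothetical record of an object iol2opc has added to the OPC.
struct TrackedObject {
    std::string name;
    bool present;
};

// Given the names currently seen by the tracker: a) mark every known
// object that is no longer seen as absent, b) remember newcomers.
inline void updatePresence(std::vector<TrackedObject> &known,
                           const std::set<std::string> &seen) {
    for (auto &obj : known)
        obj.present = (seen.count(obj.name) > 0);  // a) vanished -> false
    for (const auto &name : seen) {                // b) record new objects
        bool isKnown = false;
        for (const auto &obj : known)
            if (obj.name == name) { isKnown = true; break; }
        if (!isKnown) known.push_back({name, true});
    }
}
```

Calling this once per frame keeps the known-object list in sync, and the `present == false` entries are exactly the ones whose OPC entity should get m_present=false.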
Apparently it is working fine for others (even inside proactiveTagging).
Just a reminder for me to fix that.
./Max
See title
@maxime-petit could you address this problem and the cmake warning reported below?
CMake Warning at conf/wysiwydFindDependencies.cmake:33 (find_package):
By not providing "FindRTABMap.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "RTABMap", but
CMake did not find one.
Could not find a package configuration file provided by "RTABMap" with any
of the following names:
RTABMapConfig.cmake
rtabmap-config.cmake
Add the installation prefix of "RTABMap" to CMAKE_PREFIX_PATH or set
"RTABMap_DIR" to a directory containing one of the above files. If
"RTABMap" provides a separate development package or SDK, be sure it has
been installed.
Call Stack (most recent call first):
CMakeLists.txt:48 (include)
CMake Warning at conf/wysiwydFindDependencies.cmake:34 (find_package):
By not providing "FindPCL.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "PCL", but
CMake did not find one.
Could not find a package configuration file provided by "PCL" with any of
the following names:
PCLConfig.cmake
pcl-config.cmake
Add the installation prefix of "PCL" to CMAKE_PREFIX_PATH or set "PCL_DIR"
to a directory containing one of the above files. If "PCL" provides a
separate development package or SDK, be sure it has been installed.
Call Stack (most recent call first):
CMakeLists.txt:48 (include)
@stephane-lallee @gregoire-pointeau
I've seen that the keyboardBabbling branch has somehow been merged into master, even though this does not show up in the commit graph. Could we thus safely remove it?
Hello, we have a problem that I guess comes from conflicts between ARE and attentionSelector when using subsystem_ARE.
What we want to achieve is: the robot looks in front of him at first. There are 4 objects: top/bottom, right/left. The robot looks at each object for 2 s, then points at the object.
(The idea is then, with reservoir computing, to give the reservoir just the joint information of the robot corresponding to the look, and it should predict the joint sequence for the point.)
So we do the following:
bool bSuccess = iCub->look(sObject);
Time::delay(1.0);
bSuccess &= iCub->point(sObject, bHand, true);
but the iCub does not look at the object at first. When it is supposed to point at it, it looks correctly.
When I ask it only to look (iCub->look(object)), the robot doesn't move. When I ask it to point (iCub->point(object)), the iCub looks at the object and points correctly.
ICubClient::look() remaps onto a tracking behavior of the gaze (by inheritance); we should foresee a simple look service (i.e. no tracking) instead.
Hi all,
as we all know, we have started using stereo vision to get the position of objects. So far, we directly employ the 3D point of the object acquired from SFM to push/point/grasp the object. However, if I understood @pattacini correctly, we should "correct" the 3D point using depth2kin.
The actual implementation is straightforward (see https://github.com/robotology/grasp/blob/master/precision-grasp/graspSynthesis/src/precisionGrasp.cpp#L1133). However, I am not sure yet where the correction should be made. Directly in subsystem_ARE? But what if someone still uses the reactable; then the 3D coordinates should be used without correction. Shall we introduce a boolean flag to specify whether the point should be corrected or not? If so, should there be a default (which one)?
I think this is especially important for the stuff you are doing at Inserm, @AnneLaureM, @gregoire-pointeau.
Best, Tobias
Someone should find out how our demo advances the work done in the ALEAR project.
libraries/cvz/include/cvz/core/ICvz.h
error: 'DBL_MAX' was not declared
error: 'DBL_MIN' was not declared
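These constants live in <cfloat>, which GCC (unlike MSVC) does not reliably pull in transitively, hence the error on Linux only. A likely fix, sketched (adding the include to ICvz.h, or switching to the <limits> equivalents):

```cpp
#include <cfloat>   // DBL_MAX / DBL_MIN live here
#include <limits>   // C++-idiomatic alternative

// Same value two ways; either include fixes the missing-declaration error.
inline double maxDouble()  { return DBL_MAX; }
inline double maxDouble2() { return std::numeric_limits<double>::max(); }
```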
Someone should find out how our demo advances the work done in the CHRIS project: http://www.chrisfp7.eu/
Attention selector should run in the demo, so the iCub appears more alive :).
This comes with the problem that some objects might disappear from the view.
Hi @MagnusJohnsson
Thanks for committing the code to the repo.
Could you please address the following warnings we have during compilation?
C:\dev\wysiwyd\main\src\modules\attentionRelated\verbRec\src\verbRec.cpp(214): warning C4244: '=' : conversion from 'int' to 'float', possible loss of data
C:\dev\wysiwyd\main\src\modules\attentionRelated\verbRec\src\verbRec.cpp(261): warning C4244: '=' : conversion from 'int' to 'float', possible loss of data
C:\dev\wysiwyd\main\src\modules\attentionRelated\verbRec\src\verbRec.cpp(262): warning C4244: '=' : conversion from 'int' to 'float', possible loss of data
C:\dev\wysiwyd\main\src\modules\attentionRelated\verbRec\src\verbRec.cpp(377): warning C4305: 'initializing' : truncation from 'double' to 'float'
C:\dev\wysiwyd\main\src\modules\attentionRelated\verbRec\src\verbRec.cpp(443): warning C4305: 'initializing' : truncation from 'double' to 'float'
C:\dev\wysiwyd\main\src\modules\attentionRelated\verbRec\src\verbRec.cpp(525): warning C4305: 'initializing' : truncation from 'double' to 'float'
C:\dev\wysiwyd\main\src\modules\attentionRelated\verbRec\src\verbRec.cpp(607): warning C4305: 'initializing' : truncation from 'double' to 'float'
C:\dev\wysiwyd\main\src\modules\attentionRelated\verbRec\src\verbRec.cpp(688): warning C4305: 'initializing' : truncation from 'double' to 'float'
C:\dev\wysiwyd\main\src\modules\attentionRelated\verbRec\src\verbRec.cpp(808): warning C4244: '+=' : conversion from 'float' to 'int', possible loss of data
C:\dev\wysiwyd\main\src\modules\attentionRelated\verbRec\src\verbRec.cpp(809): warning C4244: '+=' : conversion from 'float' to 'int', possible loss of data
C:\dev\wysiwyd\main\src\modules\attentionRelated\verbRec\src\verbRec.cpp(810): warning C4244: '+=' : conversion from 'float' to 'int', possible loss of data
C:\dev\wysiwyd\main\src\modules\attentionRelated\verbRec\src\verbRec.cpp(811): warning C4244: '+=' : conversion from 'float' to 'int', possible loss of data
C:\dev\wysiwyd\main\src\modules\attentionRelated\verbRec\src\verbRec.cpp(864): warning C4305: 'initializing' : truncation from 'double' to 'float'
C:\dev\wysiwyd\main\src\modules\attentionRelated\verbRec\src\verbRec.cpp(892): warning C4244: '+=' : conversion from 'float' to 'int', possible loss of data
C:\dev\wysiwyd\main\src\modules\attentionRelated\verbRec\src\verbRec.cpp(893): warning C4244: '+=' : conversion from 'float' to 'int', possible loss of data
C:\dev\wysiwyd\main\src\modules\attentionRelated\verbRec\src\verbRec.cpp(894): warning C4244: '+=' : conversion from 'float' to 'int', possible loss of data
C:\dev\wysiwyd\main\src\modules\attentionRelated\verbRec\src\verbRec.cpp(895): warning C4244: '+=' : conversion from 'float' to 'int', possible loss of data
You should also replace tabs with spaces for consistent indentation.
Cheers
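For reference, a hedged sketch of the usual fixes for these two warning families (values are illustrative, not taken from verbRec.cpp):

```cpp
// C4305 ('truncation from double to float'): use float literals.
const float threshold = 0.5f;  // was: float threshold = 0.5;

// C4244 ('conversion from float to int, possible loss of data'):
// make the intended narrowing explicit with a cast.
inline int accumulate(int total, float delta) {
    return total + static_cast<int>(delta);  // was: total += delta;
}
```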
It seems that presently it is not possible to export multiple-level trees of library headers. This applies specifically to wrdac, where we have include branches like clients, knowledge and subsystems. Building is not problematic, whereas installing triggers this issue.
We should also consider refactoring wysiwyd's wrdac (it's too big indeed), adhering to the above standard and even considering splitting it into slimmer libraries. /cc @Tobias-Fischer
In the response() method of the ABM, some functions have quite a few parameters (e.g. triggerStreaming has 5). So far, they have to be passed in a precise order. Instead, we should use named parameter pairs.
E.g. instead of
triggerStreaming 1 0 1 1.0 0
which has no meaning at all to someone not used to the code, we should allow
triggerStreaming (useRealiCub 1) (includeAugmented 1)
which is much easier to understand.
This can be easily implemented using the .find() method of the Bottle class: http://wiki.icub.org/yarpdoc/bottle_2main_8cpp-example.html
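A YARP-free sketch of the idea; the real implementation would use Bottle::find() as linked above, and the names here (findParam, the int-only values) are simplifications:

```cpp
#include <string>
#include <utility>
#include <vector>

// Minimal stand-in for Bottle::find(): look up a named parameter and fall
// back to a default, so the argument order no longer matters.
using Params = std::vector<std::pair<std::string, int>>;

inline int findParam(const Params &p, const std::string &key, int def) {
    for (const auto &kv : p)
        if (kv.first == key) return kv.second;
    return def;
}

// Handles e.g.: triggerStreaming (useRealiCub 1) (includeAugmented 1)
inline bool triggerStreaming(const Params &p) {
    int useRealiCub      = findParam(p, "useRealiCub", 0);
    int includeAugmented = findParam(p, "includeAugmented", 0);
    return useRealiCub && includeAugmented;
}
```

Unspecified parameters simply take their defaults, which is exactly what the positional form cannot express.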
In the meeting with Yiannis, he suggested that one possible way to extend the demo might be to proactively tag actions. I.e. the iCub should be able to discover the names of the actions he is able to do. We should start with looking, pointing and pushing.
The currently available module is outdated, and the new module relies on ROS. Thus, we need to find a way to compile this module in the proper place if ROS is installed.
Hi all,
The current segmentation algorithm to extract blobs is based on luma-chroma segmentation. As we have found in the integration meeting in Sheffield, this method does not work particularly well. This is a shame, as the object recognition itself works very nicely.
One alternative with more promising results would be to incorporate the 3D information we get from the SFM module. The Point Cloud Library offers several algorithms to achieve what we want. The most promising seems to be this one: http://pointclouds.org/news/2012/04/03/new-object-segmentation-algorithms/
Some other algorithms also implemented in PCL are described here:
As we rely on a quite noisy source, we might want to use outlier removal prior to the clustering:
If someone else knows other libraries which can achieve similar results, please add them below.
Ugo @pattacini, what do you think?
Best, Tobi
Someone should find out how our demo advances the work done in the POETICON and POETICON++ projects: http://www.poeticon.eu/ and http://poeticon.csri.gr/
Not sure why this is happening. No problem on Windows...
Linking CXX executable ../../../../bin/frontalEyeField
CMakeFiles/frontalEyeField.dir/frontalEyeField.cpp.o: In function `cvz::core::ModalityBufferedPort<yarp::os::Bottle>::Unvectorize(std::vector<double, std::allocator<double> >)': frontalEyeField.cpp:(.text+0x420): multiple definition of `cvz::core::ModalityBufferedPort<yarp::os::Bottle>::Unvectorize(std::vector<double, std::allocator<double> >)'
CMakeFiles/frontalEyeField.dir/main.cpp.o:main.cpp:(.text+0x1e0): first defined here
CMakeFiles/frontalEyeField.dir/frontalEyeField.cpp.o: In function `cvz::core::ModalityBufferedPort<yarp::sig::ImageOf<yarp::sig::PixelRgb> >::Unvectorize(std::vector<double, std::allocator<double> >)': frontalEyeField.cpp:(.text+0x510): multiple definition of `cvz::core::ModalityBufferedPort<yarp::sig::ImageOf<yarp::sig::PixelRgb> >::Unvectorize(std::vector<double, std::allocator<double> >)'
CMakeFiles/frontalEyeField.dir/main.cpp.o:main.cpp:(.text+0x2d0): first defined here
CMakeFiles/frontalEyeField.dir/frontalEyeField.cpp.o: In function `cvz::core::ModalityBufferedPort<yarp::sig::ImageOf<float> >::Unvectorize(std::vector<double, std::allocator<double> >)': frontalEyeField.cpp:(.text+0x6d0): multiple definition of `cvz::core::ModalityBufferedPort<yarp::sig::ImageOf<float> >::Unvectorize(std::vector<double, std::allocator<double> >)'
CMakeFiles/frontalEyeField.dir/main.cpp.o:main.cpp:(.text+0x490): first defined here
CMakeFiles/frontalEyeField.dir/frontalEyeField.cpp.o: In function `cvz::core::ModalityBufferedPort<yarp::sig::Sound>::Unvectorize(std::vector<double, std::allocator<double> >)': frontalEyeField.cpp:(.text+0x7c0): multiple definition of `cvz::core::ModalityBufferedPort<yarp::sig::Sound>::Unvectorize(std::vector<double, std::allocator<double> >)'
CMakeFiles/frontalEyeField.dir/main.cpp.o:main.cpp:(.text+0x580): first defined here
CMakeFiles/frontalEyeField.dir/frontalEyeField.cpp.o: In function `cvz::core::ModalityBufferedPort<yarp::sig::ImageOf<yarp::sig::PixelRgb> >::getVisualizationFromVector(std::vector<double, std::allocator<double> >)': frontalEyeField.cpp:(.text+0x940): multiple definition of `cvz::core::ModalityBufferedPort<yarp::sig::ImageOf<yarp::sig::PixelRgb> >::getVisualizationFromVector(std::vector<double, std::allocator<double> >)'
CMakeFiles/frontalEyeField.dir/main.cpp.o:main.cpp:(.text+0x690): first defined here
CMakeFiles/frontalEyeField.dir/frontalEyeField.cpp.o: In function `cvz::core::ModalityBufferedPort<yarp::sig::Sound>::Vectorize(yarp::sig::Sound*)': frontalEyeField.cpp:(.text+0x1be0): multiple definition of `cvz::core::ModalityBufferedPort<yarp::sig::Sound>::Vectorize(yarp::sig::Sound*)'
CMakeFiles/frontalEyeField.dir/main.cpp.o:main.cpp:(.text+0x6f0): first defined here
CMakeFiles/frontalEyeField.dir/frontalEyeField.cpp.o: In function `cvz::core::ModalityBufferedPort<yarp::sig::ImageOf<float> >::Vectorize(yarp::sig::ImageOf<float>*)': frontalEyeField.cpp:(.text+0x1fa0): multiple definition of `cvz::core::ModalityBufferedPort<yarp::sig::ImageOf<float> >::Vectorize(yarp::sig::ImageOf<float>*)'
CMakeFiles/frontalEyeField.dir/main.cpp.o:main.cpp:(.text+0xad0): first defined here
CMakeFiles/frontalEyeField.dir/frontalEyeField.cpp.o: In function `cvz::core::ModalityBufferedPort<yarp::sig::ImageOf<yarp::sig::PixelRgb> >::Vectorize(yarp::sig::ImageOf<yarp::sig::PixelRgb>*)': frontalEyeField.cpp:(.text+0x21d0): multiple definition of `cvz::core::ModalityBufferedPort<yarp::sig::ImageOf<yarp::sig::PixelRgb> >::Vectorize(yarp::sig::ImageOf<yarp::sig::PixelRgb>*)'
CMakeFiles/frontalEyeField.dir/main.cpp.o:main.cpp:(.text+0xd10): first defined here
CMakeFiles/frontalEyeField.dir/frontalEyeField.cpp.o: In function `cvz::core::ModalityBufferedPort<yarp::os::Bottle>::Vectorize(yarp::os::Bottle*)': frontalEyeField.cpp:(.text+0x2470): multiple definition of `cvz::core::ModalityBufferedPort<yarp::os::Bottle>::Vectorize(yarp::os::Bottle*)'
CMakeFiles/frontalEyeField.dir/main.cpp.o:main.cpp:(.text+0xfc0): first defined here
collect2: ld returned 1 exit status
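A plausible cause, sketched with simplified stand-in types (this is a guess, not a diagnosis of the actual cvz headers): a full specialization of a template member function is an ordinary function, so if it is defined in a header without `inline`, every translation unit that includes the header emits its own copy, and GNU ld (unlike MSVC's linker here) rejects the duplicates:

```cpp
#include <string>
#include <vector>

// Simplified stand-in for cvz::core::ModalityBufferedPort<T>.
template <class T>
struct ModalityPort {
    std::vector<double> unvectorize(const T &v);
};

// Primary template definitions in headers are implicitly fine...
template <class T>
std::vector<double> ModalityPort<T>::unvectorize(const T &) {
    return std::vector<double>();
}

// ...but a full specialization needs `inline` to satisfy the one
// definition rule when the header is included from several .cpp files.
template <>
inline std::vector<double>
ModalityPort<std::string>::unvectorize(const std::string &s) {
    return std::vector<double>(s.size(), 1.0);
}
```

If the specializations in the cvz headers lack `inline`, adding it (or moving the definitions into a single .cpp) should make the Linux link succeed.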
Hi,
recently we started getting warnings about changing to travis container builds rather than using their legacy infrastructure. Apparently, it's faster, allows caching, ... (see http://docs.travis-ci.com/user/migrating-from-legacy/?utm_source=legacy-notice&utm_medium=banner&utm_campaign=legacy-upgrade).
The only drawback is that sudo cannot be used. We use sudo to:
a) install the icub-common package from deb http://www.icub.org/ubuntu precise contrib/science. This source would need to be added to their new whitelist (https://github.com/travis-ci/apt-source-whitelist), which in turn needs a GPG key, which as far as I know does not exist yet.
b) install yarp, icub-common, icub-main and kinect-wrapper. To avoid using sudo there, we can set the corresponding environment variables and install the packages there rather than in /usr/bin or wherever they get installed by default.
I don't think it's urgent to switch, but it probably needs to be done sooner or later. Ugo @pattacini, do you know who is in charge of the deb packages and could create a GPG key? I guess that's a long-missing feature anyway...
Best,
Tobi
@gregoire-pointeau, please fix the following compilation errors I've found on Windows:
..\..\..\..\..\src\modules\abm\abmReasoning\src\abmReasoning.cpp(1396): error C3861: 'ptr_fun': identifier not found
..\..\..\..\..\src\modules\abm\abmReasoning\src\abmReasoning.cpp(1403): error C3861: 'ptr_fun': identifier not found
..\..\..\..\..\src\modules\abm\abmReasoning\src\abmReasoning.cpp(1408): error C3861: 'ptr_fun': identifier not found
..\..\..\..\..\src\modules\abm\abmReasoning\src\abmReasoning.cpp(1416): error C3861: 'ptr_fun': identifier not found
..\..\..\..\..\src\modules\abm\abmReasoning\src\abmReasoning.cpp(1435): error C3861: 'ptr_fun': identifier not found
..\..\..\..\..\src\modules\abm\abmReasoning\src\abmReasoning.cpp(1441): error C3861: 'ptr_fun': identifier not found
Further, be careful since I've re-grouped all ABM related modules under one dedicated folder.
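For reference, C3861 on ptr_fun usually means <functional> (its home header) is not included; and since std::ptr_fun is deprecated anyway, a lambda avoids it entirely. A sketch with a hypothetical trim helper (not the actual abmReasoning code):

```cpp
#include <algorithm>
#include <cctype>
#include <string>

// Typical old-style use was std::not1(std::ptr_fun<int, int>(std::isspace));
// the lambda below does the same without <functional> at all.
inline std::string ltrim(std::string s) {
    s.erase(s.begin(),
            std::find_if(s.begin(), s.end(),
                         [](unsigned char c) { return !std::isspace(c); }));
    return s;
}
```

If keeping ptr_fun is preferred for now, adding #include <functional> at the top of abmReasoning.cpp should silence C3861 on its own.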
During BCBT and IROS, we detected an annoying bug/issue in Barcelona when using speech recognition.
When we try to run the demo, before running any of the other modules, speechRecognizer already has a grammar loaded which detects "yes", "no", "let's play pong", etc. (like the sentences used in EFAA?). This interferes with other modules that use speech recognition. Has anyone faced this same problem, or is it something particular here in Barcelona? Do you know if there is any method in speech recognition that automatically loads a grammar? If not, I will check which module could be interfering.
Both ears and proactiveTagging are prone to the first issue, as they segfault when they receive an unexpected sentence from the ASR, or they output invalid values, depending on the case. I guess that this issue is easy to deal with. I can take care of it if you agree that the changes should be done directly in these modules.
Let me know what you think of it.
Jordi
@gregoire-pointeau was saying that we should extend the demo so that the iCub interacts with two different agents, and each agent might use different objects.
@matejhof reports that the module in subject does not compile on a linux machine because it wants to include windows.h.
@stephane-lallee, please fix the bug accordingly: if the module is supposed to run only on windows, put a guard in the corresponding cmake file.
During the integration, we talked about having the possibility to re-train the names of objects. For example, the iCub might think an object is called "apple", but its actual name is "carrot". This might be due to a) wrongly detected pointing in the training phase, or b) "apple" being a fantasy name of the iCub.
A simple implementation might be to just use the already existing code, but extend it such that objects which do not start with unknown_object are also considered. Then, we should be able to say "iCub, point at the carrot". The iCub responds "I don't know which of these objects is the carrot". We then point at the "apple" (which is actually the carrot). The iCub then says: "Now I know that this is the carrot".
A more advanced version would change the name step for step, so rather than one-shot learning we would need to teach the iCub the name of an object over several iterations.
Hi all,
as discussed on slack, we have issues with the OPCClient. See for example this code snippet, which leads to an invalid pointer, as checkout() deletes all the Entity* and allocates new memory for new Entity* objects:
Object* o = opc->addOrRetrieveEntity<Object>("test");
opc->checkout();
cout << "This does not work, because checkout changes the pointers!!! " << o->name(); // o is dangling here
My comments on slack were as follows:
I think we have a fundamental issue. Every time EntitiesCache is used, a list of pointers to entities is returned. However these pointers might change over time (I am not sure why). So if a module uses EntitiesCache, and then iterates over the entities, the actual pointers to the entities might have already changed. And then we see corruptions, as we've seen yesterday in #bcbt-2015. So we need to figure out why the pointer to the entity with the same OPC id changes over time. You might think: Why not always use EntitiesCacheCopy? This is only feasible if a module is only reading attributes from the entities. As soon as an entity needs to be changed, you must use EntitiesCache. Otherwise you change the values of the copy, but not the entity itself.
In the meantime, as above, I have found that checkout() is the evil method changing the pointers.
I guess the following should be discussed: what should checkout() do by default? Should updateCache really be true by default? It deletes all pointers and retrieves the entities fresh from the OPC. Isn't what we really want an update of the entity, i.e. the entity itself is kept (thus no deleting of any pointers etc.), but its attributes are updated with the content stored in the OPC?
Best,
Tobi
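A sketch of the "update in place" behaviour suggested above (all names are invented; the real change would live in OPCClient's cache and checkout()):

```cpp
#include <map>
#include <memory>
#include <string>

// Simplified stand-in for a wrdac entity.
struct Entity {
    std::string name;
    double x;
};

struct EntityCache {
    std::map<int, std::unique_ptr<Entity>> byId;

    // Returns a stable pointer: if the id is already cached, the existing
    // Entity is overwritten in place, so pointers held by callers stay
    // valid across checkouts (unlike delete-and-reallocate).
    Entity *checkoutOne(int id, const Entity &fresh) {
        auto it = byId.find(id);
        if (it == byId.end())
            it = byId.emplace(id, std::unique_ptr<Entity>(new Entity())).first;
        *it->second = fresh;  // update attributes, keep the allocation
        return it->second.get();
    }
};
```

With this policy, the snippet above would keep working: o would still point at the same Entity after checkout(), just with refreshed attributes.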
Hi all,
Thus far we have employed the Reactable to get the transformation matrix from the robot root reference frame to the Kinect reference frame. It would be great if we replaced this with a solution that employs stereo vision.
I see two possible solutions; for example, we could use ICP for that, and employ prior knowledge (an approximate transformation is known from the geometrical layout of the Kinect pole).
Best, Tobi
We had the issue in the integration meeting that agentDetector always detected an agent called unknown as present, although there is another agent called partner, which is the agent we actually want to recognize.
Further, I suggest renaming unknown to unknown_agent.
Hi everyone! Just a general intro, my name is Daniel Camilleri and I've just started a job as a research assistant with the WYSIWYD team in Sheffield. Any help with the problem below would be appreciated.
After successfully managing to segment the object using iol, I was trying the rest of the interaction to teach the iCub what the object is by saying "what is this?". Now, as the second screenshot shows, iol_main.lua is receiving an acknowledgement that the text has been received and recognized, but I did not get any sort of response from the iCub.
There must be a link of some sort missing or malfunctioning but I can't figure out which. Does anyone have an idea as to how I can debug this problem? Thanks!
Hi,
recently @gregoire-pointeau pushed some changes which require C++11 support. After some reading, I found that under Visual Studio (partial) C++11 support is automatically enabled. However, when compiling with gcc, one needs to enable C++11 support by setting the proper compiler flags.
I'd be happy to enable them; it's really easy. Then we can go ahead with the beauty of C++11.
I'll do this in the integration meeting next week if no one has concerns. gcc has supported C++11 for a while now, i.e. even gcc 4.6 shipped with Ubuntu 12.04 supports the core features.
Best,
Tobi
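For reference, a minimal sketch of the flag change (assuming the project's existing CMake setup; newer CMake versions would set CMAKE_CXX_STANDARD instead):

```cmake
# Enable C++11 for GCC/Clang; Visual Studio enables its C++11 subset
# automatically, so no flag is needed there.
if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
    # Note: gcc 4.6 only understands the older spelling -std=c++0x;
    # gcc >= 4.7 accepts -std=c++11 as well.
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")
endif()
```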
Hi !
I've added the long-named module "interpersonalDistanceRegulator", with a small thrift interface to get familiar with it.
It seems to compile fine; however, if I keep the cmake option "allow_IDL_generation", then CMake throws a weird error during generation (not configuration):
Configuring done
CMake Error at src/modules/interpersonalDistanceRegulator/CMakeLists.txt:24 (add_executable):
Cannot find source file:
include/interpersonalDistanceRegulator_IDL.h
If anybody has an insight on that I would take it. I ran out of understanding...
Someone should find out how our demo advances the work done by Bugmann (e.g. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4415179&tag=1) and Cangelosi (https://scholar.google.co.uk/citations?user=NyoHewcAAAAJ&hl=en&oi=ao)
@gregoire-pointeau the branch dev-efaa/inserm is marked as stale: can I delete it?
When a human starts an interaction, we should use SAM to see whether we already know the face.
We should be able to ask the iCub to move a body part, e.g. "iCub, move your index". The iCub then responds "I don't know my index, can you please touch it".
This extension is similar to the extension which was made at BCBT2015 for objects.
The iCub should look at an object before asking what the name of the object is. Same for agents.
So far, we need to wait several seconds before a sentence is recognized when the grammar of speechRecognizer is changed. I think a clean solution would be to interrupt the loop in which the speechRecognizer is stuck when called with recognizeGrammerLoop (or something like that), so a new grammar can be loaded faster. That seems cleaner than reducing the time of the loop.
@maxime-petit: Let's tackle this together.
Rather than just using the pointing action, we should be able to use perspective taking to infer which object the human is looking at. This can be used to increase the salience of an object, and thus for the proactive tagging demo.
During the London integration meeting, we had the issue that touchDetector crashes right at the moment when we try to detect the touch in the proactiveTagging module. We need to debug why and fix the issue.
Any volunteers?
Someone should find out how our demo advances the work done in the iTalk project: http://www.italkproject.org/
Hi guys,
The repository is rather big (120 MB), which made cloning quite a long process. I've made an analysis, and it appears that the files in wysiwyd/main/app/visionRelated/headPoseEstimator/conf/trees/ take up around 40 MB. This doesn't explain the whole 120 MB, but at least the main part of it.
Do these files really need to be versioned?
@gregoire-pointeau suggested using the narrative handler in the proactive tagging demo, so that the iCub can talk about past events. Please add some more details about what you have in mind, @gregoire-pointeau.