When installed, this optional package provides a specfile that
directs gcc (and g++ or gfortran) to automatically:
- search for includes in $PREFIX/include
- link libraries in $PREFIX/lib
- set RPATH to $PREFIX/lib
- use RPATH instead of the newer RUNPATH
This package is intended to aid usability of the compiler
toolchain as a replacement for system-installed compilers.
It should not be used in recipes. Use the 'compiler()'
jinja function as described on
https://conda-forge.org/docs/maintainer/knowledge_base.html#dep-compilers
It is possible to list all of the versions of _openmp_mutex available on your platform with conda:
conda search _openmp_mutex --channel conda-forge
or with mamba:
mamba search _openmp_mutex --channel conda-forge
Alternatively, mamba repoquery may provide more information:
# Search all versions available on your platform:
mamba repoquery search _openmp_mutex --channel conda-forge
# List packages depending on `_openmp_mutex`:
mamba repoquery whoneeds _openmp_mutex --channel conda-forge
# List dependencies of `_openmp_mutex`:
mamba repoquery depends _openmp_mutex --channel conda-forge
About conda-forge
conda-forge is a community-led conda channel of installable packages.
In order to provide high-quality builds, the process has been automated into the
conda-forge GitHub organization. The conda-forge organization contains one repository
for each of the installable packages. Such a repository is known as a feedstock.
A feedstock is made up of a conda recipe (the instructions on what and how to build
the package) and the necessary configurations for automatic building using freely
available continuous integration services. Thanks to the awesome service provided by
Azure, GitHub, CircleCI, AppVeyor, Drone, and TravisCI,
it is possible to build and upload installable packages to the
conda-forge channel on anaconda.org for Linux, Windows, and macOS.
To manage the continuous integration and simplify feedstock maintenance,
conda-smithy has been developed.
Using the conda-forge.yml within this repository, it is possible to re-render all of
this feedstock's supporting files (e.g. the CI configuration files) with conda smithy rerender.
feedstock - the conda recipe (raw material), supporting scripts and CI configuration.
conda-smithy - the tool which helps orchestrate the feedstock.
Its primary use is in the construction of the CI .yml files
and in simplifying the management of many feedstocks.
conda-forge - the place where the feedstock and smithy live and work to
produce the finished article (built conda distributions)
Updating ctng-compilers-feedstock
If you would like to improve the ctng-compilers-feedstock recipe or build a new
package version, please fork this repository and submit a PR. Upon submission,
your changes will be run on the appropriate platforms to give the reviewer an
opportunity to confirm that the changes result in a successful build. Once
merged, the recipe will be re-built and uploaded automatically to the
conda-forge channel, whereupon the built conda packages will be available for
everybody to install and use from the conda-forge channel.
Note that all branches in conda-forge/ctng-compilers-feedstock are
immediately built and any created packages are uploaded, so PRs should be based
on branches in forks and branches in the main repository should only be used to
build distinct package versions.
In order to produce a uniquely identifiable distribution:
If the version of a package is not being increased, please add or increase
the build/number.
If the version of a package is being increased, please remember to return
the build/number back to 0.
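As an illustration, the two cases look like this in a meta.yaml fragment (package name and versions here are hypothetical):

```yaml
# Recipe-only change: keep the version, bump the build number.
package:
  name: some-package     # hypothetical
  version: "1.2.3"
build:
  number: 1              # was 0
# New upstream version: bump the version, reset the build number.
#   version: "1.2.4"
#   number: 0
```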
I am trying to understand which tests and validations are recommended for bringing up the conda compilers on a new architecture.
There are test stages in meta.yaml for gcc, g++, and the other compilers in ctng-compilers-feedstock, plus the test coverage below in build.sh.
When we were working with the Continuum folks during the development of conda-build 3, my colleagues and I suggested rather strongly that -I $PREFIX/include and the four required linking arguments -Wl,-rpath,$PREFIX/lib -Wl,-rpath-link,$PREFIX/lib -Wl,-disable-new-dtags -L $PREFIX/lib be built into the configuration of GCC. That way, when someone does a conda install gcc and then tries to build something with gcc, it operates like one would expect: treating the conda environment as the 'local' include and lib path, and setting things appropriately to use RPATH everywhere.
I don't recall why Ray and Michael stuck with setting *FLAGS in the activation scripts. To their credit, we have been able to get very far with that. However, to continue polishing the end-user experience, I feel that we should still continue to use the activation scripts for building packages but that their use should be optional for users who just want to compile their own code using conda to install the prerequisites of a C/C++ development environment.
Since you guys moved away from ctng, it is probably easier to do this now. Also, it looks like LLVM/clang has the ability to define configuration files and accomplish behavior equivalent to the spec-file chicanery required to get the RPATHs by default in GCC. Again, I think it's time to make this work.
@conda-forge/ctng-compilers folks, I'm interested to hear your feedback. I have recipe changes building locally that I'm going to Q/A a bit before opening a PR.
UnsatisfiableError: The following specifications were found to be incompatible with a past
explicit spec that is not an explicit spec in this operation (_openmp_mutex):
- _openmp_mutex[build=*_gnu]
To reproduce, create an environment using pytorch-cuda-dev.yaml file (see below), activate, and run conda install _openmp_mutex=*=*_gnu.
I also tried inserting _openmp_mutex=*=*_gnu to the yaml with no success.
What seems to work is a manual reset of the libgomp.so.1->libomp.so link to libgomp.so.1->libgomp.so:
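Concretely, the reset can be sketched as follows (the real path is $CONDA_PREFIX/lib; a scratch directory stands in for it here so the commands are safe to run anywhere):

```shell
# Simulate the broken state and the fix in a scratch directory.
libdir=$(mktemp -d)
touch "$libdir/libgomp.so" "$libdir/libomp.so"
ln -s libomp.so "$libdir/libgomp.so.1"    # state produced by the solver
ln -sf libgomp.so "$libdir/libgomp.so.1"  # reset to point at GNU libgomp
readlink "$libdir/libgomp.so.1"           # prints: libgomp.so
```

In a real environment the same two-command pattern would be applied inside $CONDA_PREFIX/lib, at the cost of diverging from what the package manager recorded.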
Solution to issue cannot be found in the documentation.
I checked the documentation.
Issue
It seems that Ubuntu 22.04 doesn't play well with openGL from conda-forge.
mamba create --name dev --channel conda-forge --override-channels matplotlib --yes
mamba activate dev
python -c "from matplotlib import pyplot as plt; plt.plot(range(5)); plt.show()"
libGL error: MESA-LOADER: failed to open iris: /usr/lib/dri/iris_dri.so: cannot open shared object file: No such file or directory (search paths /usr/lib/x86_64-linux-gnu/dri:\$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
libGL error: failed to load driver: iris
libGL error: MESA-LOADER: failed to open iris: /usr/lib/dri/iris_dri.so: cannot open shared object file: No such file or directory (search paths /usr/lib/x86_64-linux-gnu/dri:\$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
libGL error: failed to load driver: iris
libGL error: MESA-LOADER: failed to open swrast: /usr/lib/dri/swrast_dri.so: cannot open shared object file: No such file or directory (search paths /usr/lib/x86_64-linux-gnu/dri:\$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
libGL error: failed to load driver: swrast
While the plot shows up, other hardware-accelerated stuff (like using vispy) doesn't work. I didn't know where else to post this, so I figured I would post it to matplotlib since it seems to have rather high visibility into these issues.
Solution to issue cannot be found in the documentation.
I checked the documentation.
Issue
On most system package managers, libstdc++ provides an automatic integration into gdb by placing a file into share/gdb/auto-load that loads the pretty-printers that are present in share/gcc-x.x.x/python/libstdcxx. This integration seems to be missing from conda, which makes debugging binaries that were compiled with conda's C++ compiler unnecessarily tedious.
Two possible solutions to this would be:
1. creating a custom gdbinit file that sets the auto-load safe-path and loads the file, or
2. creating the file in the expected location for libstdc++.so in the first place, which can be seen by running gdb on a simple C++ test program:
> gdb ./test
> set debug auto-load on
> run
...
auto-load: Attempted file "/usr/share/gdb/auto-load/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.28-gdb.py" exists.
auto-load: Loading python script "/usr/share/gdb/auto-load/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.28-gdb.py" by extension for objfile "/lib/x86_64-linux-gnu/libstdc++.so.6".
auto-load: Matching file "/usr/share/gdb/auto-load/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.28-gdb.py" to pattern "/usr/lib/debug"
auto-load: Not matched - pattern "/usr/lib/debug".
auto-load: Matching file "/usr/share/gdb/auto-load/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.28-gdb.py" to pattern "/usr/share/gdb/auto-load"
auto-load: Matched - file "/usr/share/gdb/auto-load" to pattern "/usr/share/gdb/auto-load"
...
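A minimal sketch of option 1 is a user-level gdbinit along these lines (the environment prefix and the gcc version directory are assumptions and must match the installed gcc_impl package):

```
# ~/.gdbinit (illustrative paths)
add-auto-load-safe-path /home/user/miniconda3/envs/dev
python
import sys
# assumed location of the libstdc++ pretty-printers in the environment
sys.path.insert(0, '/home/user/miniconda3/envs/dev/share/gcc-12.3.0/python')
from libstdcxx.v6.printers import register_libstdcxx_printers
register_libstdcxx_printers(None)
end
```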
Not sure if this has been reported upstream, as I am able to reproduce this only with this combo: -fno-plt + -fuse-ld=gold. If I remove either, the problem goes away.
I saw this problem happen with mongodb and mysql. @hmaarrfk found that downgrading the compiler version to 8 resolves the issue, so I ended up doing that on both these feedstocks.
Are there any plans to bump the glibc version this package compiles against to 2.17? Or possibly to add the configure flag --enable-libstdcxx-time?
When libstdc++ is compiled against a glibc older than 2.17, a compatibility define (_GLIBCXX_USE_CLOCK_GETTIME_SYSCALL) is set in the header so that the built libstdc++ library forces std::chrono::system_clock::now() to use the syscall version of clock_gettime instead of one that can be accelerated by the vDSO. (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=59177)
The effect is that any code built in an environment using this package pays a fairly large performance penalty when querying the system clock this way.
For Linux targets, if clock_gettime is not used then the [time.clock] implementation
will use a system call to access the realtime and monotonic clocks, which is
significantly slower than the C library's clock_gettime function.
Issue:
When using the compilers package to compile code, I get linking errors against libquadmath.so.0 on Linux, seemingly with 9.3.0 build 11, which I did not get a week ago with a slightly different environment.
Issue:
I use the cxx-compiler (gcc 9.3, linux-64, micromamba) to compile and sanitize (with -fsanitize=address) some code, which used to work well.
Now, installing cxx-compiler installs gcc 9.3 but libgcc-ng 11.1.0, and I get a linker error
…
copying glmnet_python/loadGlmLib.py -> build/lib.linux-x86_64-3.8/glmnet_python
running build_ext
/home/uwe/miniconda3/envs/test-build7/bin/../lib/gcc/x86_64-conda-linux-gnu/7.5.0/../../../../x86_64-conda-linux-gnu/bin/ld: cannot find -lquadmath
collect2: error: ld returned 1 exit status
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-req-build-h1hnpswb/setup.py", line 41, in <module>
setup(name='glmnet_python',
File "/home/uwe/miniconda3/envs/test-build7/lib/python3.8/site-packages/setuptools/__init__.py", line 161, in setup
return distutils.core.setup(**attrs)
File "/home/uwe/miniconda3/envs/test-build7/lib/python3.8/distutils/core.py", line 148, in setup
dist.run_commands()
File "/home/uwe/miniconda3/envs/test-build7/lib/python3.8/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/home/uwe/miniconda3/envs/test-build7/lib/python3.8/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/home/uwe/miniconda3/envs/test-build7/lib/python3.8/site-packages/wheel/bdist_wheel.py", line 223, in run
self.run_command('build')
File "/home/uwe/miniconda3/envs/test-build7/lib/python3.8/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/uwe/miniconda3/envs/test-build7/lib/python3.8/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/home/uwe/miniconda3/envs/test-build7/lib/python3.8/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/home/uwe/miniconda3/envs/test-build7/lib/python3.8/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/uwe/miniconda3/envs/test-build7/lib/python3.8/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/tmp/pip-req-build-h1hnpswb/setup.py", line 22, in run
self.build_extension(ext)
File "/tmp/pip-req-build-h1hnpswb/setup.py", line 35, in build_extension
subprocess.check_call(['gfortran', ext.input] + gfortran_args, cwd=self.build_temp, env=env)
File "/home/uwe/miniconda3/envs/test-build7/lib/python3.8/subprocess.py", line 364, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['gfortran', '/tmp/pip-req-build-h1hnpswb/glmnet_python/GLMnet.f', '-fPIC', '-fdefault-real-8', '-shared', '-o', '/tmp/pip-req-build-h1hnpswb/glmnet_python/GLMnet.so']' returned non-zero exit status 1.
----------------------------------------
ERROR: Failed building wheel for glmnet-python
Running setup.py clean for glmnet-python
Failed to build glmnet-python
Solution to issue cannot be found in the documentation.
I checked the documentation.
Issue
x86_64-conda-linux-gnu-c++: internal compiler error: Segmentation fault signal terminated program as
Please submit a full bug report,
with preprocessed source if appropriate.
See https://github.com/conda-forge/ctng-compilers-feedstock/issues/new/choose for instructions.
make: *** [/home/twise/projects/moose/framework/build.mk:150: /home/twise/projects/moose/framework/build/unity_src/postprocessors_Unity.x86_64-conda-linux-gnu.opt.lo] Error 1
/* File test.cpp */
int rand();

template<typename T>
struct s
{
    int count() { return rand(); }
};

template<typename v>
int f(s<v> a)
{
    int const x = a.count();
    int r = 0;
    auto l = [&](int& r)
    {
        for(int y = 0, yend = (x); y < yend; ++y)
        {
            r += y;
        }
    };
    l(r);
}

template int f(s<float>);

int main()
{
}
Which triggers the gcc bug:
$ g++ -c test.cpp
test.cpp: In instantiation of 'f(s<v>)::<lambda(int&)> [with v = float]':
test.cpp:14:16: required from 'struct f(s<v>) [with v = float]::<lambda(int&)>'
test.cpp:14:10: required from 'int f(s<v>) [with v = float]'
test.cpp:24:24: required from here
test.cpp:16:24: internal compiler error: in maybe_undo_parenthesized_ref, at cp/semantics.c:1705
for(int y = 0, yend = (x); y < yend; ++y)
^~~~
Please submit a full bug report,
with preprocessed source if appropriate.
See <https://gcc.gnu.org/bugs/> for instructions.
Notice that the same compilation error happens with the following commands
Issue: With the environment below, I cannot compile a simple test program. Once I conda update --all to a new env, getting the latest builds of the compilers, it works.
@conda-forge/ctng-compilers maintainers, I'm debugging a segfault issue in a simple libgomp program. I have a feeling -fsanitize=thread might help me pinpoint the issue. I'm reading that to use TSan, libgomp needs to be recompiled with --disable-linux-futex (something you would never want to have in a production build).
Since I'm going to muck around with building this locally for myself, if you guys think it would be useful, I can do one of the following:
PR changes to the buildscripts that make it 'easy' for someone to do a local build and document how to do it. I'm thinking that I possibly create a new variable in conda_build_config.yaml that we only put one value in and then
Build out an alternate version of _openmp_mutex that can be installed with something like conda install _openmp_mutex=*=*_gnu_nofutex, and also probably do the same for the LLVM OpenMP implementation because it has to be built explicitly with -DLIBOMP_TSAN_SUPPORT=ON. It might be better to just name them _gnu_debug and _llvm_debug.
I'm happy to just do my experiment of rebuilding libgomp for debug but since I'm doing the work, if others would find it useful, I'm willing to share. I don't think I've seen a cfep regarding debug builds of things but maybe I should go look again.
I see where it was defined higher in build.sh when originally added in #56. Any objections to taking it back out @isuruf ? I was getting confused trying to figure out where it was coming from.
{standard input}: Assembler messages:
{standard input}:186007: Warning: end of file not at end of a line; newline inserted
{standard input}:186154: Error: unknown pseudo-op: `.alig'
aarch64-conda-linux-gnu-c++: fatal error: Killed signal terminated program cc1plus
compilation terminated.
The latest version is 4.5.1, but it's suffixed with llvm. There's an older version 4.5.0 but only for gnu. I have been installing the latest one with llvm, but wondering if I should be using the older one.
I am compiling a C++ program with gcc and openmp (libgomp) which makes me feel like I should be using 4.5.0_gnu. Is there a reason why gnu does not have a 4.5.1 like llvm? Are you required to compile with clang to use 4.5.1_llvm?
Solution to issue cannot be found in the documentation.
I checked the documentation.
Issue
If I try to dlopen with RTLD_DEEPBIND from a Python environment with libgomp 13.*, I obtain a segfault. A simple reproducer is just the command python -c "import ctypes; import os; ctypes._dlopen(os.environ['CONDA_PREFIX']+'/lib/libgomp.so.1', os.RTLD_DEEPBIND)". The segfault does not occur when:
- a C/C++ program performs the dlopen, without going through the Python interpreter
- libgomp <= 12 is used
The backtrace is the following:
(gdb) bt
#0 initialize_env () at ../../../libgomp/env.c:2062
#1 0x00007ffff7fc947e in call_init (l=<optimized out>, argc=argc@entry=3, argv=argv@entry=0x7fffffffc1f8, env=env@entry=0x7fffffffc218)
at ./elf/dl-init.c:70
#2 0x00007ffff7fc9568 in call_init (env=0x7fffffffc218, argv=0x7fffffffc1f8, argc=3, l=<optimized out>) at ./elf/dl-init.c:33
#3 _dl_init (main_map=0x555555b8e620, argc=3, argv=0x7fffffffc1f8, env=0x7fffffffc218) at ./elf/dl-init.c:117
#4 0x00007ffff7e09c85 in __GI__dl_catch_exception (exception=<optimized out>, operate=<optimized out>, args=<optimized out>)
at ./elf/dl-error-skeleton.c:182
#5 0x00007ffff7fd0ff6 in dl_open_worker (a=0x7fffffffb910) at ./elf/dl-open.c:808
and seems to indicate that something is going wrong around https://github.com/gcc-mirror/gcc/blob/releases/gcc-13.2.0/libgomp/env.c#L2062 . I have a few ideas to investigate this further, like debugging the value of the environ global variable, but I am not sure when I will have time for this, so in the meanwhile I opened this issue.
What's left to be done there? I don't see gcc 9.x packages on anaconda.org yet. Perhaps I'm looking in the wrong place. Could I or someone on my team help push it the rest of the way?
Solution to issue cannot be found in the documentation.
I checked the documentation.
Issue
When installing a Fortran compiler (either via the fortran-compiler package as a user or via {{ compiler('fortran') }} in a recipe), the C-include file include/ISO_Fortran_binding.h is not installed. Missing this file prevents the compilation of modern Fortran-C interfaces. It also prevents autotools from recognizing the Fortran compiler (gfortran) as F2008-compliant. For example, this seems to be the reason why MPICH does not build the mpi_f08 module (conda-forge/mpich-feedstock#84), preventing the usage of the modern MPI interface in Fortran programs.
If I build gcc myself, the include file gets installed. (I use the --enable-languages=c,c++,fortran option).
Issue: all the lib*.so files installed by libgcc-ng and libgomp have an unpatched RPATH, which means (for example) that linking against libasan.so can yield linker errors (e.g. being unable to find the libstdc++.so.6 that libasan requires, as it doesn't look in $ORIGIN etc.).
$ for f in *.so; echo $f; readelf -d $f | grep 'rpath'; end
libasan.so
0x000000000000000f (RPATH) Library rpath: [/home/conda/feedstock_root/build_artifacts/gcc_compilers_1632803321475/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placeho/lib/../lib]
libatomic.so
0x000000000000000f (RPATH) Library rpath: [/home/conda/feedstock_root/build_artifacts/gcc_compilers_1632803324264/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placeho/lib]
libcc1.so
0x000000000000000f (RPATH) Library rpath: [$ORIGIN/.]
libgcc_s.so
readelf: Error: Not an ELF file - it has the wrong magic bytes at the start
libgomp.so
0x000000000000000f (RPATH) Library rpath: [/home/conda/feedstock_root/build_artifacts/gcc_compilers_1632803324264/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placeho/lib]
libitm.so
0x000000000000000f (RPATH) Library rpath: [/home/conda/feedstock_root/build_artifacts/gcc_compilers_1632803324264/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placeho/lib]
liblsan.so
0x000000000000000f (RPATH) Library rpath: [/home/conda/feedstock_root/build_artifacts/gcc_compilers_1632803321475/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placeho/lib/../lib]
libquadmath.so
0x000000000000000f (RPATH) Library rpath: [/home/conda/feedstock_root/build_artifacts/gcc_compilers_1632803324264/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placeho/lib]
libstdc++.so
libtsan.so
0x000000000000000f (RPATH) Library rpath: [/home/conda/feedstock_root/build_artifacts/gcc_compilers_1632803321475/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placeho/lib/../lib]
libubsan.so
0x000000000000000f (RPATH) Library rpath: [/home/conda/feedstock_root/build_artifacts/gcc_compilers_1632803321475/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placeho/lib/../lib]
Right now we have separate gfortran builds for osx in different feedstocks. If we are moving to a gcc feedstock, we should probably archive those builds and move them to the gcc feedstock. One thing I don't understand is whether we need two builds of gfortran, one meant for the clang stack and the other meant for a gcc stack on osx.
This becomes an issue when compiling libraries that depend on system headers located in /usr/include, /usr/local/include, etc. Is there a way to configure the header directories available to Conda gcc, so that the system header directories could be picked up in addition to or instead of those in the Conda environment?
For context, this issue is coming up for me when trying to compile UCX with InfiniBand/RDMA support (which requires system headers in /usr/include/rdma) in an environment that has gcc_impl_linux-64 as an implicit dependency.
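One possible workaround (a sketch, not an official conda-forge recommendation) is the CPATH environment variable, which gcc searches like extra -I directories in addition to the environment's own $PREFIX/include; the rdma directory is the one from the UCX/InfiniBand case above:

```shell
# Make /usr/include/rdma visible to the conda gcc, prepending it to any
# existing CPATH value while keeping that value intact.
export CPATH="/usr/include/rdma${CPATH:+:$CPATH}"
echo "first CPATH entry: ${CPATH%%:*}"
```

Since CPATH entries behave like -I paths, this adds the system directories rather than replacing the conda ones, which keeps $PREFIX/include usable at the same time.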
I'm using clang as a compiler (from the clangxx conda package) and also include gxx_impl_linux-64 in my environment to link against libstdc++. Unfortunately, I cannot see proper pretty printers for stdlib types when debugging the binary with gdb.
Usually distros will provide a *-dbgsym package for these, but I'm not sure how to get them from conda-forge.
Note that there's also #105 so I'm not sure if my issue is the same, or only related (since clang and gcc may have different behavior here).
GCC 14 finally finished <chrono> support (search for P0355R7 here).
We'll need to set _GLIBCXX_ZONEINFO_DIR=$PREFIX/share/zoneinfo (see here), to point to our own tzdata, assuming relocation works correctly for the embedded path there. Otherwise we could patch the zoneinfo_dir_override function to directly load the path from $CONDA_PREFIX.
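On the build side, that could look roughly like the following configure sketch (the flag name is taken from the GCC 14 installation docs; the rest of the invocation is elided and assumed):

```shell
# Sketch: bake the conda tzdata location into libstdc++ at build time
../gcc/configure --with-libstdcxx-zoneinfo="$PREFIX/share/zoneinfo" ...
```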