I committed this too quickly last time and it's been breaking a whole
bunch of stuff. Until I manage to get multilib working in a sane
way, I'm just going to go ahead and turn it off by default. You can
still pass "--with-arch", but doing so while also passing things like
"--enable-multilib" or "--disable-atomics" might have unexpected
results.
Passing "--with-arch" prevents -m32/-m64 from having any effect. This patch is just
a workaround; we should fix the runtime handling of --with-arch so that
specifying -m32/-m64 on the command line takes priority.
I have no idea why, but the installed "sed" wrapper never terminates
on RHEL. If I don't use the wrapper, the tools build fine, so this
just uses the already-set autoconf variables to determine whether the
system sed/awk are gsed/gawk; if they are, the wrappers are avoided.
There are a few oddities here:
* I have no idea why the sed wrapper fails, as it seems super safe.
* I haven't run into any awk problems, but I figured I'd treat it the
same way, as it isn't any harder.
* We shouldn't have to support 10-year-old distributions.
Hopefully this doesn't break anyone's builds...
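As a rough sketch, the detection can be as simple as asking the tool for its version string; the function name and wrapper path below are illustrative, not the repository's actual code:

```shell
#!/bin/sh
# Illustrative sketch: decide whether a given sed binary is GNU sed,
# so the wrapper can be skipped when the system tool already is gsed.
is_gnu_sed() {
  "$1" --version 2>/dev/null | grep -q "GNU sed"
}

if is_gnu_sed sed; then
  SED=sed                    # system sed is GNU; no wrapper needed
else
  SED="$(pwd)/wrapper/sed"   # hypothetical wrapper path
fi
```

The same check works for awk by matching "GNU Awk" instead.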
Some systems don't have things like GMP, MPFR, and friends installed.
Rather than requiring them to be installed as root, this uses a bit of
GCC's built-in functionality to download these libraries and build
them along with GCC.
This allows the tools to build on Red Hat, but only if you install
newer host tools (in my case GCC, make, and texinfo).
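For reference, the helper GCC ships for this lives in contrib/; a typical invocation looks like the following (the source-tree path is illustrative):

```shell
# From an unpacked GCC source tree, fetch GMP, MPFR, MPC (and ISL)
# so the top-level configure builds them along with GCC itself.
cd gcc            # illustrative path to the GCC sources
./contrib/download_prerequisites
```

After this, configure picks up the in-tree copies automatically, with no root-installed development packages required.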
buildbot.dabbelt.com recently failed because it was unable to fetch
the downloaded files fast enough because mirrors.kernel.org was being
slow. This patch allows me to cache the downloaded files somewhere to
avoid these sorts of network problems.
This should also be generally useful for other people who build the
toolchain regularly.
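A minimal sketch of what such caching can look like, assuming a DISTDIR cache directory and curl (both names are illustrative, not the repository's):

```shell
#!/bin/sh
# Illustrative cache-aware download: only hit the network when the
# tarball is not already present in the local cache directory.
fetch() {
  file=$1 url=$2
  cache=${DISTDIR:-$HOME/.cache/toolchain-dist}
  mkdir -p "$cache"
  if [ ! -f "$cache/$file" ]; then
    curl -L -o "$cache/$file" "$url"   # -L follows HTTP redirects
  fi
  cp "$cache/$file" .                  # use the cached copy
}
```

A slow mirror then only hurts the first build; later builds copy from the cache.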
I thought this was a problem with the build, but it turns out that was
actually related to my bad (and now reverted) patch from earlier this
week.
Regardless, this is the canonical way to build a cross compiler so now
that it's done I don't see a reason not to use it.
We used to allow GCC to fail the stage1 build because it's impossible
to build the full GCC without glibc. This builds just the parts we
need ("all-gcc" and "all-target-libgcc") and then installs them.
This should make build failures a bit easier to debug, as no step is
expected to fail any more.
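This is the standard cross-compiler bootstrap sequence; a sketch of the stage1 step (directory names are illustrative):

```shell
# Build and install only the compiler proper and libgcc; the rest of
# GCC can't be built until glibc exists, so it comes in a later stage.
cd build-gcc-stage1
make all-gcc all-target-libgcc
make install-gcc install-target-libgcc
```

With glibc built against this stage1 compiler, a full GCC build can follow.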
Wherever possible, rely on facilities native to GNU make rather than
invoke external utilities superfluously. Change the shell to /bin/sh
instead of bash since the latter's heftier feature set is unnecessary.
The -r option to cp(1) is marked obsolescent by POSIX; use -R instead.
Select between curl(1), wget(1), and ftp(1) through autoconf.
Wherever possible, automatically follow HTTP location redirects and
enable passive FTP mode.
Explicitly instruct tar(1) to read from stdin since this is far from
universal behavior if unspecified: The default file is /dev/sa0 in
FreeBSD, /dev/rst0 in NetBSD and OpenBSD, etc.
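Approximating the configure-time selection at the shell level (the variable names here are assumptions):

```shell
#!/bin/sh
# Pick a fetch command, preferring curl, then wget, then BSD ftp.
if command -v curl >/dev/null 2>&1; then
  FETCH="curl -L -o -"    # -L follows redirects; write to stdout
elif command -v wget >/dev/null 2>&1; then
  FETCH="wget -q -O -"    # follows redirects by default
else
  FETCH="ftp -o -"        # BSD ftp; set FTPMODE=passive for passive FTP
fi
# An explicit "-" makes tar read stdin rather than a default tape
# device such as /dev/sa0:
#   $FETCH "$url" | tar -xzf -
```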
Tests for program presence, such as those formerly embedded in the
top-level Makefile for gawk and gsed, are better suited for autoconf.
Note that it is not sufficient to merely export AWK and SED environment
variables, as packages may still directly invoke awk(1) and sed(1) with
non-standard features independent of the autotools framework.
Wrapper scripts therefore remain necessary, although these are now
generated by the configure script to avoid hard-coded paths.
Do not assume the existence of /bin/bash on all systems.
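A generated wrapper might look something like this, with @GSED@ substituted by the configure script (the placeholder name is an assumption):

```shell
#!/bin/sh
# sed.in -> sed, generated by configure: no hard-coded gsed path and
# no dependency on /bin/bash existing.
exec @GSED@ "$@"
```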
I keep forgetting that the default option is to build newlib. This
flag changes the default target to Linux, so I don't keep forgetting
to type "make linux".
Some shared binutils source code throws some harmless-looking errors
with newer GCC versions. This patch stops the binutils build from
failing when these errors show up.
Both Gentoo and LFS set this flag, so I think it's sane.
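The flag in question is presumably binutils' --disable-werror, which both Gentoo and LFS pass; the invocation below is illustrative, with $TARGET and $PREFIX standing in for the real values:

```shell
# Keep the warnings visible but don't promote them to errors, so newer
# GCCs don't fail the build over diagnostics in shared binutils code.
../binutils/configure --target="$TARGET" --prefix="$PREFIX" \
    --disable-werror
```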