
vasanthaganeshk / unladen-swallow


Python 2 JIT compiler

Home Page: https://code.google.com/archive/p/unladen-swallow/

License: Other

Shell 0.82% Python 50.62% Makefile 1.15% C 38.83% HTML 0.52% Groff 0.05% Batchfile 0.05% CSS 0.02% C++ 6.41% Visual Basic 0.01% PLSQL 0.06% R 0.01% Objective-C 0.10% Vim Script 0.02% Prolog 0.01% Emacs Lisp 0.38% M4 0.08% Assembly 0.81% DIGITAL Command Language 0.03% Inno Setup 0.05%


unladen-swallow's Issues

Get full stack traces from LLVM JITed code

Steps to reproduce:
1. Modify a C function in Python that you know will be called from LLVM so
that it will segfault.  (printf("%d\n", *((int*)NULL));)
2. Build Python --with-pydebug.
3. Load up the binary in GDB.
4. Run a python script that will call the bad function with -L2.
5. Examine the backtrace.

The stack trace should show your C function and maybe some frames above it,
and then the heap address of some LLVM-generated code with no name for it.
If you're lucky, the trace will eventually get past there and make it back
down to main. If you're unlucky, you get nothing.

As a workaround, you can set the environment variable
PYTHONLLVMFLAGS="--disable-fp-elim" when running your python binary, and
that should give you a full stack trace that is only missing information
for the LLVM-generated code itself. Eventually we'd like to enable this by
default for debug builds, but right now there's no easy way to toggle that
option. The right thing to do would probably be to submit a patch to LLVM
so we can toggle it.
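
For example, a minimal sketch of the workaround (the ./python path, the
crash.py script name, and the -L2 level are illustrative assumptions):

import os
import subprocess

# Launch the pydebug binary under gdb with frame-pointer elimination
# disabled in the LLVM-generated code.
env = dict(os.environ, PYTHONLLVMFLAGS="--disable-fp-elim")
subprocess.call(["gdb", "--args", "./python", "-L2", "crash.py"], env=env)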

Original issue reported on code.google.com by [email protected] on 8 Jun 2009 at 5:50

test_distutils leaks references

regrtest.py -R:: test_distutils leaks 103-105 references per run. This
problem is present in both trunk and release-2009Q1-maint, though trunk
leaks 103 references and release-2009Q1-maint leaks 105 references.

Original issue reported on code.google.com by collinw on 15 May 2009 at 7:03

perf.py needs to track memory usage

perf.py should grow a --memory option that will profile memory usage of the
different benchmarks. Memory usage would be summarized in the same way that
running time is.

Original issue reported on code.google.com by collinw on 18 May 2009 at 8:50

Look into faster string concatenation for templating languages?

A central part of templating languages is the way they combine multiple
strings into one string during template runtime.

The most commonly used pattern currently seems to be a BufferIO class like
the one from spitfire
(http://code.google.com/p/spitfire/source/browse/trunk/spitfire/runtime/template.py):

class BufferIO(list):
  write = list.append

  def getvalue(self):
    return ''.join(self)
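
For reference, the pattern is used roughly like this (a short sketch; the
fragment values are made up):

buf = BufferIO()
for fragment in (u'<ul>', u'<li>item</li>', u'</ul>'):
    buf.write(fragment)
html = buf.getvalue()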

The constraints of templating languages are even more minimal than those of
the built-in list used above: they care only about appending multiple
Unicode strings and a one-time combination of those into a single Unicode
string. Since no slicing, retrieval of individual values, or anything like
it is required, I'm wondering if a more fine-tuned version of this could
produce noticeable differences for the spitfire test cases. Using a
collections.deque instead of a list here doesn't produce any real
difference.

Original issue reported on code.google.com by [email protected] on 28 Mar 2009 at 2:29

LLVM LOAD_CONST implementation should skip co_consts

Currently, the LLVM implementation of the LOAD_CONST opcode indexes into
the code object's co_consts tuple. Since these are *constants*, the
generated machine code should just load the object's address, skipping
co_consts entirely.
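
For reference, the current behaviour is roughly equivalent to this
Python-level sketch (f and the operand index are illustrative):

def f():
    return 42

const = f.func_code.co_consts[1]  # the index comes from the bytecode
                                  # operand; emitting the constant's
                                  # address directly would skip this lookup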

Original issue reported on code.google.com by collinw on 29 May 2009 at 3:52

cannot install 4Suite-XML

I couldn't install 4Suite-XML with either the 2009Q1 version or the trunk
version. It complains about KeyError: 'EXTENDED_ARG'. Attached is the
output from running easy_install 4Suite-XML.

Original issue reported on code.google.com by [email protected] on 4 May 2009 at 3:59


cPickle.Unpickler objects cannot be reused

The Unpickler doesn't re-read from the file object when it runs out of
data, meaning Unpicklers can't be reused in long-lived streaming pickle
sessions. This also needs tests.
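
The streaming-reuse pattern in question looks roughly like this (a minimal
sketch using stdlib modules; the two-pickle stream is illustrative):

import cPickle
import StringIO

stream = StringIO.StringIO(cPickle.dumps(1) + cPickle.dumps(2))
up = cPickle.Unpickler(stream)
print up.load()  # 1
print up.load()  # 2 -- in a long-lived session more data would arrive
                 # on the file between load() calls, which is where the
                 # missing re-read bites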

Original issue reported on code.google.com by collinw on 26 Mar 2009 at 3:28

test_bsddb3 is flaky

test_bsddb3 fails sometimes, passes other times, resulting in spurious
failure reports. There may be patches upstream to make it more stable.

Original issue reported on code.google.com by collinw on 15 May 2009 at 9:50

Measure: function call overhead

We need to have more accurate measurements for the total time taken from
when we start a CALL_FUNCTION opcode to the time when the body of the
function starts executing. This should work regardless of whether we're
dispatching to an interpreted function or an LLVM function; C functions can
be fudged a bit (right before calling the function pointer?).


- Data should be stored in a vector and stats printed out at Python-shutdown.
- This should include whether the execution is in machine code or the
interpreter.
- Should be a special build (controlled by #ifdef's).
- Use TSCs?

Original issue reported on code.google.com by collinw on 30 May 2009 at 1:46

sqlite3 fails to import

It looks like sqlite3 fails to import because it can't track down the old
.so file. I have attached the strace -fi output, but I had to ****** some
stuff for security/privacy reasons.

Original issue reported on code.google.com by [email protected] on 11 May 2009 at 7:46


test_urllib2_localnet leaks references

regrtest.py -R:: test_urllib2_localnet currently leaks three references per
run. This problem is present in Unladen Swallow trunk and
release-2009Q1-maint, as well as mainline CPython trunk.

I've found the line that triggers the leak and am tracking down the cause.

Original issue reported on code.google.com by collinw on 15 May 2009 at 7:02

Persist LLVM IR to .pyc files

Currently, the compiler emits both CPython bytecode and LLVM IR and
attaches both to all code objects it creates. However, the marshal format
only understands bytecode and drops the LLVM IR on the floor when it saves
.pyc files. Disabling .pyc files entirely seems to slow down regrtest to
unacceptable levels so r323 just checks that the LLVM IR is present when it
tries to run through the LLVM JIT and raises a SystemError if it's not
present. We'll need to fix this before the JIT is really a viable way to
run things.
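
For reference, the lossy path is the ordinary code-object round-trip (a
sketch; in a stock build this is lossless, the point being that nothing
LLVM-specific is part of the format):

import marshal

def f():
    return 42

blob = marshal.dumps(f.func_code)
restored = marshal.loads(blob)
# restored keeps co_code, co_consts and friends, but the marshal format
# has no slot for the LLVM IR attached to unladen-swallow code objects.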

Original issue reported on code.google.com by [email protected] on 24 Mar 2009 at 7:34

LLVM-generated functions need to switch threads

Currently, LLVM-generated functions don't yield the GIL to other threads
(unless they're doing IO) the way the interpreter does. Among other
problems, this greatly reduces threading fairness when running with -L[012].

evlogimenos has agreed to work on this (and signal handling, since they're
related).

Original issue reported on code.google.com by collinw on 28 May 2009 at 12:30

Inline calls to Py_INCREF/Py_DECREF

In our LLVM code, Py_INCREF/Py_DECREF are function calls instead of the
nice speedy macros that the interpreter can take advantage of. These calls
should be inlined in the LLVM IR.

jyasskin is already working on this.

Original issue reported on code.google.com by collinw on 27 May 2009 at 10:15

Measure: JIT compilation time

We should measure how long execution blocks for when sending a code object
to be compiled/optimized by LLVM. This will be useful for proving that
offloading compilation/optimization to worker threads is valuable.

- These times should be stored in a vector and statistics displayed at
Python-shutdown.
- This should be a special build (ie, controlled by #ifdef's).
- Timing done using TSCs?

Original issue reported on code.google.com by collinw on 30 May 2009 at 1:39

Simple functions are way bigger than they need to be. Fix that.

We currently know of a couple inefficiencies:

1. Because we always load the stack pointer, LLVM can't optimize away the
unwind loop, even in simple functions that, say, return once from the outer
scope. For non-generators, we can do better.
2. The block stack is stored in the frame, which again prevents LLVM from
optimizing away most of the unwind block.

This issue will hold a summary of this kind of issue, but we may split it
up when we get around to optimizing these things.

Original issue reported on code.google.com by [email protected] on 21 Apr 2009 at 10:01

Compiling large functions runs out of memory

The following program:

import sys
copies = int(sys.argv[1])
print "Running %d copies..." % copies

longexpr = 'x = x or ' + '-x' * 2500
code = ('''
def f(x):
''' + '''    %s
''' * copies + '''
    # the expressions above have no effect, x == argument
    while x:
        x -= 1
    return x
''') % ((longexpr,) * copies)
exec code
print f(5)



demonstrates non-linear memory use and running times with Unladen Swallow
r573. This was extracted from test_compile, which takes more than 4GB of
memory and exhausts a 32-bit address space.

The memory use below is from watching the "real memory" column in the
Apple Activity Monitor and taking the highest number I saw.

$ time ./python.exe -S -L0 ./use_lots_of_memory.py  1
Running 1 copies...
0

real    0m37.979s
user    0m36.972s
sys 0m0.602s
memory 174MB

$ time ./python.exe -S -L0 ./use_lots_of_memory.py  2
Running 2 copies...
0

real    1m15.750s
user    1m13.479s
sys 0m1.368s
memory 491MB  (delta 317)

$ time ./python.exe -S -L0 ./use_lots_of_memory.py  3
Running 3 copies...
0

real    2m6.118s
user    2m2.631s
sys 0m2.328s
memory 944MB  (delta 453)

$ time ./python.exe -S -L0 ./use_lots_of_memory.py  4
Running 4 copies...
0

real    5m59.303s
user    3m10.135s
sys 0m14.223s
memory 1500MB  (delta 556)


Watching memory use of the "3" case, it seems to rise to ~30-40MB up to and
during SimplifyCFG, then rise by ~3 MB per second through CodeGenAndEmitDAG
up to about 150MB, and then rise by ~50MB per second through
LiveVariables::runOnMachineFunction up to about 750MB. LiveIntervals seems
to account for most of the rest of the memory use.

Original issue reported on code.google.com by [email protected] on 22 May 2009 at 2:59

Tune hotness function

We should add a sys.optimize decorator so that known-important functions
don't have to hit the hotness threshold before we optimize them. This would
be particularly useful for the slowspitfire benchmark: the function that
does all the work will never be "hot" by our current heuristics.

Currently, if we force compilation via -L2, slowspitfire shows a ~10% gain
over 2009Q1, but -L2 hurts start-up time. A decorator like this is similar
to the way Spitfire uses Psyco (ie, explicitly flagging functions for
optimization).
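
A minimal sketch of what such a hint could look like in pure Python (the
decorator and the attribute the hotness check would consult are both
hypothetical):

def optimize(func):
    # Hypothetical marker; the JIT's hotness test would treat any
    # function carrying this attribute as already past the threshold.
    func.__force_compile__ = True
    return func

@optimize
def render(fragments):
    return u''.join(fragments)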

Original issue reported on code.google.com by collinw on 5 Jun 2009 at 11:47

Review vmgen branch for merge into trunk

Purpose of code changes on this branch:
To use vmgen to generate the main interpretation loop in PyEval_EvalFrameEx().

When reviewing my code changes, please focus on:
Readability, understandability, and anything stupid I did.

After the review, I'll merge this branch into:
/trunk

Original issue reported on code.google.com by [email protected] on 31 Jan 2009 at 1:00

f_lasti is probably broken

We have altered the definition of f_lasti: pray we don't alter it any
further; but regardless, we should make it work again for pdb.

Original issue reported on code.google.com by [email protected] on 22 Apr 2009 at 3:26

Add a regex benchmark suite

Python currently doesn't have a good-quality regex benchmark suite that can
be run automatically, have statistics drawn from it, etc. We need such a
thing before starting work on regex performance.

Possible resources:
- Fredrik Lundh's original benchmarks for SRE:
http://mail.python.org/pipermail/python-dev/2000-August/007797.html
- V8's JS regex benchmarks:
http://v8.googlecode.com/svn/data/benchmarks/v3/regexp.js

Ideally we would do a search of the Python regexes in Google Code Search or
similar corpus and distill some representative set from them. V8's may be
good enough, though.

Original issue reported on code.google.com by collinw on 14 Apr 2009 at 11:37

Reference leaks in LLVM mode

When running regrtest -R:: (which requires a pydebug build) with -L0 or
higher, some of the tests end up leaking references:

test_codecs leaked [27, 27, 27, 27] references, sum=108
test_copy leaked [2, 2, 2, 2] references, sum=8
test_datetime leaked [2, 2, 2, 2] references, sum=8
test_decimal leaked [2, 2, 2, 2] references, sum=8
test_difflib leaked [56, 56, 56, 56] references, sum=224
test_generators leaked [296, 296, 296, 296] references, sum=1184
test_grammar leaked [3, 3, 3, 3] references, sum=12
test_io leaked [2, 2, 2, 2] references, sum=8
test_itertools leaked [26, 26, 26, 26] references, sum=104
test_lib2to3 leaked [22, 22, 22, 22] references, sum=88

These tests do not leak without -L. Considering test_llvm does not leak, I
suspect a leak in an error path that test_llvm fails to test.

Original issue reported on code.google.com by [email protected] on 20 May 2009 at 1:12

Cannot pickle cPickle.Pickler objects

In Python 2.6, you can pickle cPickle.Pickler and Unpickler items. I seem
to have broken that in Unladen Swallow.

Original issue reported on code.google.com by collinw on 22 Apr 2009 at 4:42

Speed up regular expressions

CPython's regex implementation is slower than it could be. There are a lot
of techniques we could use to make things faster:

- JIT compile regexes. V8 and SquirrelFish Extreme have had good luck with
this approach.
- Thompson NFAs are much faster for some regexes, but don't support the
full range of Python regex syntax. This would necessitate a multi-engine
system that picks the fastest engine for the given regex.
- Apply VM optimizations from the main Python eval loop to the regex eval
loop. This isn't likely to give us the speed we want, but may help in a
multi-engine approach if one of those engines is the existing regex VM.


Longer-form, with links:
http://code.google.com/p/unladen-swallow/wiki/ProjectPlan#Regular_Expressions

Original issue reported on code.google.com by collinw on 14 Apr 2009 at 11:30

Python compiles dead code

CPython and Unladen Swallow both currently emit bytecode/machine code for
code following a return, raise, break or continue statement, like so:

def foo():
  return 5
  for i in range(4):
    print i
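
For instance, disassembling foo still shows bytecode for the unreachable
loop (a sketch; foo is the function defined above):

import dis
dis.dis(foo)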

Eliminating this code isn't critical (it isn't very common, and LLVM is
smart enough to eliminate it for us), but the need to support this has
spread into the LLVM compiler, which has to allow for this with special
deadcode blocks. This is ugly and should be fixed.

This could be done in a couple of ways: the dead code could be pruned from
the AST (or not even make it into the AST), or could be ignored by the
bytecode compiler.

Original issue reported on code.google.com by collinw on 13 May 2009 at 10:40

Some tests are flaky

test_bsddb3 fails sometimes, passes other times, resulting in spurious
failure reports. There may be patches upstream to make it more stable.

Original issue reported on code.google.com by collinw on 15 May 2009 at 9:49

Add a script to install LLVM with the right options

Now that we have the --with-llvm option, we should have a script in
Util/llvm to configure, make and install LLVM with the proper options. We
pass a bunch of options from Python's ./configure to LLVM's ./configure,
and those should be automated into a simple script. Otherwise, you'll have
to dig through Python's ./configure to find the right options every time,
and that's a pain in the ass.

This will make --with-llvm a lot easier to use.

Original issue reported on code.google.com by collinw on 29 May 2009 at 4:47

Share code objects between identical functions

This idea is more speculative: every "def foo(): pass" function has the
same bytecode. It might reduce memory usage if we shared the
implementation of these functions between function definitions. This could
be done by hashing the function's parameters and bytecode.
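
A minimal sketch of the keying idea (the exact set of fields is
illustrative):

def code_key(code):
    # Key on the fields that determine behaviour, deliberately ignoring
    # co_name, co_filename and line-number information.
    return (code.co_code, code.co_consts, code.co_names,
            code.co_varnames, code.co_argcount, code.co_flags)

def foo(): pass
def baz(): pass
def bar(a): pass

print code_key(foo.func_code) == code_key(baz.func_code)  # True
print code_key(foo.func_code) == code_key(bar.func_code)  # False: the
                                                          # argcount and
                                                          # varnames differ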

Issues:
- Hashing the AST might be better than hashing the bytecode.
- If two functions have the same bytecode but different parameter names,
they probably can't take advantage of this. For example, "def foo(): pass"
and "def bar(a): pass" can't share an implementation.

I'd be interested to know, for an application like 2to3 or Django's test
suite, how many functions are really the same (modulo their name).

Original issue reported on code.google.com by collinw on 6 Jun 2009 at 12:07

Add fuzz-based testing

As we do deeper and deeper surgery on the compiler, I'd like to have a
fuzzer or some other kind of randomized testing to look for corner cases in
our implementation.

Before writing our own, we should try to reuse Fusil
(http://fusil.hachoir.org/svn/trunk/), which has been shown to find bugs in
CPython already. Other Python implementations may already have something
like this; if so, we should reuse that.

Eventually, this would become part of the continuous build, forever
searching the space of random Python programs for crashes.

Original issue reported on code.google.com by collinw on 14 Apr 2009 at 11:46

Review opcodes_to_functions branch

Purpose of code changes on this branch: simplify the eval loop by
converting a number of infrequent opcodes to builtin functions. These
builtins are prefixed with #@, e.g., #@import_star.

This shows a 1% improvement in 2to3 performance on all tested platforms.
This should also contribute positively to the vmgen branch.


After the review, I'll merge this branch into:
/trunk


Original issue reported on code.google.com by collinw on 5 Jan 2009 at 7:34

Review only_why_reports_exceptions change-branch

Purpose of code changes on this branch: To investigate removing all ways of
reporting errors except for 'why'.

2to3 seems to be slightly faster with this patch; pybench seems to be
slightly slower. We'll need to measure this on real hardware and compilers,
rather than my laptop.

There are a couple bugs in EvalFrameEx where it doesn't signal an error
when it should. Since those are functionality changes, I'll do them in a
separate change.

After the review, I'll merge this branch into:
/trunk

... Does code know how to diff branches? We'll see.

Original issue reported on code.google.com by [email protected] on 8 Dec 2008 at 12:17

Re-make-ing rebuilds too much

Running make twice in a row takes forever, even if you didn't change
anything: make wants to rebuild Python/llvm_inline_functions.bc, which
cascades into causing the python binary and libpython to both be relinked.

A simple no-op make takes almost 30 seconds, all to do nothing.

Original issue reported on code.google.com by collinw on 28 May 2009 at 4:07

Teach the JIT to recompile things

We currently use llvm::ExecutionEngine::getPointerToFunction(func) to
translate LLVM IR into machine code. getPointerToFunction caches the
returned machine code for each input function, and will not regenerate it
even if the function has changed. This means we can't re-optimize a
function after calling it through LLVM.

ExecutionEngine::recompileAndRelinkFunction(func) is available to force
LLVM to regenerate the machine code, but it overwrites the original machine
code with a jump to the new block, which means we can't use it while any
stack frame in any thread is still executing the code.

More information on fixing this is at
http://wiki.llvm.org/Provide_more_control_over_and_access_to_JIT%27s_output

Original issue reported on code.google.com by [email protected] on 28 May 2009 at 10:47

Add a ./configure flag to disable LLVM

In order to maintain support for smaller platforms like cell phones, we
should include a ./configure flag to disable LLVM entirely. Since we'll be
keeping around the eval loop, this should be pretty straightforward to
implement.

Original issue reported on code.google.com by collinw on 13 Apr 2009 at 10:33

Please review the faster-pickling branch

Purpose of code changes on this branch: speed up cPickle.

Pickle (complex):
Min: 1.023 -> 0.409: 150.36% faster
Avg: 1.053 -> 0.410: 157.17% faster
Significant (t=1102.029662, a=0.95)

Pickle (simple):
Min: 1.223 -> 0.868: 40.83% faster
Avg: 1.229 -> 0.876: 40.20% faster
Significant (t=695.483070, a=0.95)

Unpickle (complex):
Min: 0.738 -> 0.536: 37.71% faster
Avg: 0.746 -> 0.547: 36.24% faster
Significant (t=122.112665, a=0.95)

Unpickle (simple):
Min: 0.756 -> 0.486: 55.60% faster
Avg: 0.774 -> 0.493: 56.91% faster
Significant (t=331.578243, a=0.95)


When reviewing my code changes, please focus on: anything stupid I did,
style issues, things that might block merger back to mainline.

Let me know if you'd rather review this on Rietveld.


After the review, I'll merge this branch into:
/trunk



Original issue reported on code.google.com by collinw on 28 Feb 2009 at 9:49

Make perf.py --track_memory work on other platforms

Currently, perf.py's --track_memory option works by reading Linux 2.6 smaps
files from /proc/. This obviously doesn't work on non-Linux platforms, or
even Linux platforms pre-2.6.16.

Darwin is the most important of the currently-unsupported platforms.
/usr/bin/time -l (lowercase ell) on Darwin will provide the maximum rss for
the process.
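
One possible cross-platform angle (a sketch, not what perf.py does today):
the stdlib resource module reports a child process's peak RSS, with
platform-dependent units.

import resource

usage = resource.getrusage(resource.RUSAGE_CHILDREN)
# ru_maxrss is in kilobytes on Linux but bytes on Darwin.
print usage.ru_maxrss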

Original issue reported on code.google.com by collinw on 27 May 2009 at 2:52

TSC support is broken.

Building with TSC (the WITH_TSC define, enabled by --with-tsc), which
instruments CPython with CPU counters, is currently broken, at least as of
the vmgen patch, which does not include the right #ifdefs in call_function
and the CALL_FUNCTION_VAR_KW opcode implementation.

Original issue reported on code.google.com by [email protected] on 14 Apr 2009 at 1:34

compilation fails with IBM xlc in ceval

In ceval.c, compilation fails with IBM's xlc at the following point:

Python/ceval.c-854-     static Opcode labels[] = {
Python/ceval.c:855:#include "ceval-labels.i"
Python/ceval.c-856-     };

with a message for each entry in ceval-labels.i that is of the form:

"Include/ceval-labels.i", line 1.1: 1506-221 (S) Initializer must be a
valid constant expression. 

I think it's actually just upset about taking the address of a label,
something gcc complains about if you try to compile with -pedantic. It's
not a showstopper as it is still possible to compile ceval with gcc and
finish the build with xlc.

Original issue reported on code.google.com by [email protected] on 27 Apr 2009 at 3:45

Huge expressions are really slow to compile under LLVM

Lib/test/test_compile.py takes 1.7s for python2.6, 33s for trunk unladen
swallow (which emits some LLVM bitcode), and 12.5min for the llvm-working
branch (which emits full LLVM bitcode). The problem is test_extended_arg
which emits expressions containing 2500 subtractions.

Original issue reported on code.google.com by [email protected] on 30 Mar 2009 at 10:36

Offload JIT compilation to secondary threads

Execution should not block on compiling a function with LLVM or
reoptimizing it with new data. We should send these work units to separate
worker threads, allowing the main threads to carry on unimpeded.

The first implementation could/should probably just be a simple FIFO queue
of work items with a single worker thread. We can add heuristics and
additional worker threads as the data warrants.
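
A minimal sketch of that first cut, as Python-level pseudocode for the
C-level machinery (compile_to_llvm is a hypothetical stand-in for the real
compilation entry point):

import threading
import Queue  # Python 2 spelling

work = Queue.Queue()  # FIFO of code objects awaiting compilation

def worker():
    while True:
        code_obj = work.get()
        compile_to_llvm(code_obj)  # hypothetical JIT entry point
        work.task_done()

t = threading.Thread(target=worker)
t.daemon = True  # don't block interpreter shutdown on the queue
t.start()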

Original issue reported on code.google.com by collinw on 28 May 2009 at 9:57

Add fine-grained JIT command line options

We should have a --jit option (approximate) that controls when and how
often the reoptimizer will run.

--jit=once will compile a function as heavily as possible the first time
it's found hot, then ignore it thereafter. This may mean that bad
predictions can't be corrected; we'll have to see.

--jit=never disables the LLVM integration, forcing all code to run through
the bytecode interpreter.

--jit=everything is equivalent to the current -L[012] options: all
functions are compiled to machine code when they are defined. As functions
are found to be hotter and hotter, they may be reoptimized.

The -O option should be extended to take numeric arguments the way gcc
does. -O[0123] will control the initial optimization used by --jit=once and
--jit=everything.

I'm open to debate on what other options --jit should support. Should
--jit=everything disable reoptimization?

Original issue reported on code.google.com by collinw on 30 May 2009 at 1:35

Make the stackpointer not escape

CALL_FUNCTION, CALL_FUNCTION_VAR_KW and UNPACK_SEQUENCE make the
stackpointer escape LLVM's purview because their implementations (in C
functions) manipulate the stack directly. Getting rid of the direct
manipulation would allow LLVM to optimize more stack operations (and make
it easier to move away from the stack machine).

Original issue reported on code.google.com by [email protected] on 21 Apr 2009 at 10:41

Need support for debugging LLVM-generated machine code

We currently don't have a good way of debugging the machine code that comes
out of LLVM's JIT compiler.

To the best of my knowledge, LLVM doesn't emit debug information for JITted
code. Even if it did, there's no way to tell gdb to read this information.
However, there is support in the LLVM and gdb communities for fixing both
of these issues.

References:
- http://lists.cs.uiuc.edu/pipermail/llvmdev/2009-March/021255.html
- http://lists.cs.uiuc.edu/pipermail/llvmdev/2009-April/021421.html
- http://lists.cs.uiuc.edu/pipermail/llvmdev/2009-April/021424.html
- http://wiki.llvm.org/HowTo:_Tell_GDB_about_JITted_code

Original issue reported on code.google.com by collinw on 27 May 2009 at 9:42

Turn BUILD_TUPLE/BUILD_LIST into memcpy operations.

The core of BUILD_TUPLE/BUILD_LIST (and also other opcodes that build
tuples or lists out of stack items, like CALL_FUNCTION might do) is
essentially a memcpy() from the stack to the body of the (newly created)
tuple/list object. LLVM doesn't seem to optimize this by itself, so we
should turn this into an @llvm.memcpy call.

Original issue reported on code.google.com by [email protected] on 21 Apr 2009 at 10:43

Consider use of CMake for build system

CMake would let developers choose the build system or IDE they use to
build unladen-swallow. What say you?

Is anyone on the project familiar with CMake?

Original issue reported on code.google.com by [email protected] on 27 Mar 2009 at 9:22

LLVM-generated functions need to handle signals

Currently, LLVM-generated functions don't ever check for pending signals to
handle the way the interpreter does. Among other problems, this makes it
impossible to KeyboardInterrupt Unladen Swallow when running with -L[012].

evlogimenos has agreed to work on this (and thread switching, since they're
related).

Original issue reported on code.google.com by collinw on 28 May 2009 at 12:21

llc produces unused variables

llc produces some unused variables in its output (at least for
initial_llvm_module.cc):

Python/initial_llvm_module.cc:662: warning: unused variable ‘PointerTy_80’
Python/initial_llvm_module.cc:664: warning: unused variable ‘PointerTy_81’
Python/initial_llvm_module.cc:675: warning: unused variable ‘PointerTy_82’
Python/initial_llvm_module.cc:693: warning: unused variable ‘PointerTy_86’
Python/initial_llvm_module.cc:704: warning: unused variable ‘PointerTy_88’
Python/initial_llvm_module.cc:716: warning: unused variable ‘PointerTy_90’
Python/initial_llvm_module.cc:724: warning: unused variable ‘PointerTy_92’

We silence the warnings for now, but we should see if we can fix llc instead.

Original issue reported on code.google.com by [email protected] on 12 May 2009 at 9:06
