We should implement a global array bounds check elimination phase. The ABCD prototype was removed from the system in 2.2.1 because it was never finished and was out of date. But some form of global bounds check elimination would be useful.
Logged In: YES
user_id=308843
I have worked on a loop-based bound- and null-check
elimination phase. It optimises loops of the form:
for (int i = n; i < N; i++) {
    g1 = null_check A;
    g2 = bound_check A, i, g1;
    A[i] = -1, g2;
}
into:
g3 = if ((n < N) && (N <= A.length)) {
    for (int i = n; i < N; i++) {
        A[i] = -1, g3;
    }
} else {
    for (int i = n; i < N; i++) {
        g1 = null_check A;
        g2 = bound_check A, i, g1;
        A[i] = -1, g2;
    }
}
This is different from how global array bounds check
elimination works, and pi node analysis should be able to
remove parts of the generated code. I hope to analyse this
work in relation to global array bounds check elimination.
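A minimal Java sketch of the source-level effect of the loop versioning described above (the class and method names are illustrative). Note that the sketch's guard also tests a != null and n >= 0, which the pseudocode's test omits but which are needed for the fast version to be safe when A may be null or n may be negative:

```java
class LoopVersioningSketch {
    static void fill(int[] a, int n, int N) {
        if (a != null && n >= 0 && N <= a.length) {
            // Fast version: the guard proves a is non-null and every
            // a[i] access for n <= i < N is in bounds, so a compiler
            // performing this transformation can drop the
            // per-iteration null_check and bound_check.
            for (int i = n; i < N; i++) {
                a[i] = -1;
            }
        } else {
            // Slow version: the original loop, where each store keeps
            // its implicit null and bounds checks (and may throw).
            for (int i = n; i < N; i++) {
                a[i] = -1;
            }
        }
    }
}
```

Both versions compute the same result; the guard only decides whether the per-iteration checks are provably redundant.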
Logged In: YES
user_id=1215435
Loop versioning like this would be pretty useful to have in
the system. Obviously it is not a complete solution to global
bounds check elimination, but it probably catches most of
the benefit.
Logged In: YES
user_id=308843
I'm confused about what "global array bounds check
elimination" means in this RFE in particular, as we seem to do
some ABCD work in an opt 2 compilation. Does it mean that we
can't eliminate bounds checks on arrays allocated outside of
a block of IR? But, for example, given:
g1 = if (x < A.length)
g2 = bounds_check x < A.length
we can eliminate the bounds check and replace it with a
guard_move of g1, even if A was allocated outside of the
block of code.
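The IR fragment above has a rough source-level analogue: when an explicit test dominates the access, the access's implicit bounds check duplicates the test and can be replaced by a guard move of the test's result. A hedged sketch (illustrative names; the sketch also tests x >= 0, which the IR fragment above leaves out, so that the access is fully covered):

```java
class DominatingTestSketch {
    // The explicit test dominates the array access, so the access's
    // implicit bounds check is provably redundant: ABCD-style analysis
    // can replace it with a guard move of the test's result,
    // regardless of where 'a' was allocated.
    static int readIfValid(int[] a, int x) {
        if (x >= 0 && x < a.length) {
            return a[x];  // bounds check provably redundant here
        }
        return -1;
    }
}
```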
I believe what this RFE is for is that we don't do
versioning when properties of the index are unknown (I know
there's a todo in the code for this). So for:
g2 = bounds_check x < A.length
[.... uses of g2]
it would be nice to version the code so that we get:
g1 = if x >= 0 && x < A.length
    g2 = g1
    [.... uses of g2]
else
    g2 = bounds_check x < A.length
    [.... uses of g2]
(NB the test for g1 could be replaced with an unsigned
compare). Given this doesn't reduce the number of tests
unless the guard is in a loop, what is the potential for
this to do more than loop versioning?
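The unsigned-compare remark can be made concrete. Since an array length is never negative, the signed pair x >= 0 && x < len collapses to a single unsigned comparison: a negative x reinterpreted as unsigned is at least 2^31, which exceeds any valid length. A small sketch using the standard JDK method Integer.compareUnsigned (Java 8+; the class and method names here are illustrative):

```java
class UnsignedCompareSketch {
    // One unsigned compare replaces the two signed tests
    // x >= 0 && x < len, provided len itself is non-negative
    // (array lengths always are).
    static boolean inBounds(int x, int len) {
        return Integer.compareUnsigned(x, len) < 0;
    }
}
```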
Side note: are we inserting Pi operators when we have final
static arrays or final arrays that are loaded from a final
static object?
Or am I just totally confused, and Pi operator insertion has
been dropped altogether? I could swear otherwise, but from
talking to Mike, he thinks we're just not doing any optimisation.
Ian
Logged In: YES
user_id=1215435
I haven't looked to see exactly what is going on right now,
so this could be wrong. However, the basic story is that
the ABCD prototype in Jikes RVM was never actually correct.
It eliminated upper-bound checks (i < length) but didn't check lower bounds (i >= 0).
Despite this, it was enabled for a little while, but we
decided to disable it since it wasn't technically correct.
It's possible that the pi node computation is still
happening, but I'm pretty sure that the ABCD code that would
then exploit the analysis results is disabled. There was
always the hope that someone would finish the ABCD
implementation, so we didn't completely rip out the
supporting analysis code when we disabled the transformation.
Take with a grain of salt, since I didn't trace through all
the code in svn head, but I'm fairly sure that is what the
situation is.
--dave
Logged In: YES
user_id=1215435
Originator: YES
Moved to JIRA: http://jira.codehaus.org/browse/RVM-564