Lioncash | 4507627905 | emit_x64_vector: Provide AVX path for EmitVectorMinU64() | 2020-04-22 20:53:46 +01:00
Lioncash | fd49a62b06 | emit_x64_vector: Provide AVX path for EmitVectorMinS64() | 2020-04-22 20:53:46 +01:00
Lioncash | 770723f449 | emit_x64_vector: Provide AVX path for EmitVectorMaxU64() | 2020-04-22 20:53:46 +01:00
Lioncash | 8fb90c0cf1 | emit_x64_vector: Provide AVX path for EmitVectorMaxS64() | 2020-04-22 20:53:46 +01:00
Lioncash | 2cac6ad129 | emit_x64_vector: Simplify EmitVectorLogicalLeftShift8() | 2020-04-22 20:53:46 +01:00
    Similar to EmitVectorLogicalRightShift8(), we can determine a mask ahead of time and AND it with the result of a halfword left shift.
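The trick this commit describes rests on x86 having no 8-bit-lane shift instruction: the emitter can shift 16-bit lanes instead (as `psllw` would) and then AND with a precomputed mask that clears the bits each byte shifted into its neighbour. A scalar C++ sketch over a single halfword, with an illustrative function name that is not dynarmic's:

```cpp
#include <cassert>
#include <cstdint>

// Shift each byte of a 16-bit value left by n, using only a halfword
// shift plus a mask: after (x << n), the low byte's top bits have leaked
// into the high byte, and ANDing with a per-byte mask of (0xFF << n)
// clears exactly those leaked bits.
uint16_t shift_left_bytes_in_halfword(uint16_t x, int n) {
    const uint8_t byte_mask = static_cast<uint8_t>(0xFF << n);
    const uint16_t mask = static_cast<uint16_t>((byte_mask << 8) | byte_mask);
    return static_cast<uint16_t>((x << n) & mask);
}
```

For example, shifting `0xABCD` left by 3 yields `0x5868`, i.e. `0xAB << 3` and `0xCD << 3` computed independently in each byte.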
Lioncash | 135107279d | emit_x64_vector: Simplify EmitVectorLogicalShiftRight8() | 2020-04-22 20:53:46 +01:00
    We can generate the mask and AND it against the result of a halfword shift instead of looping.
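The right-shift case mirrors the left-shift trick with the mask reversed: a halfword right shift leaks the high byte's low bits into the low byte, and a per-byte mask of `0xFF >> n` clears them. Again a scalar sketch with a hypothetical name, not dynarmic's code:

```cpp
#include <cassert>
#include <cstdint>

// Shift each byte of a 16-bit value right by n via a halfword shift
// (as psrlw would do per 16-bit lane) followed by a precomputed mask.
uint16_t shift_right_bytes_in_halfword(uint16_t x, int n) {
    const uint8_t byte_mask = static_cast<uint8_t>(0xFF >> n);
    const uint16_t mask = static_cast<uint16_t>((byte_mask << 8) | byte_mask);
    return static_cast<uint16_t>((x >> n) & mask);
}
```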
Lioncash | 2952b46b16 | emit_x64_vector: Amend value definition in SSE 4.1 path for EmitVectorSignExtend16() | 2020-04-22 20:53:46 +01:00
    We should be defining the value after the results have been calculated, to be consistent with the rest of the code.
Lioncash | fda19095ea | emit_x64_vector: Remove fallback in EmitVectorSignExtend64() | 2020-04-22 20:53:46 +01:00
    This is fairly trivial to do manually.
Lioncash | 39593fcd26 | emit_x64_vector: Remove fallback for EmitVectorSignExtend32() | 2020-04-22 20:53:46 +01:00
    We can just do the extension manually, which gets rid of the need to fall back here.
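Doing the extension "manually" means computing it with plain bit operations rather than a dedicated sign-extension instruction. One standard manual formulation is XOR-with-bias then subtract: biasing by the sign bit and subtracting the bias back propagates the sign through the upper bits. A scalar sketch of that technique, offered as an illustration rather than dynarmic's actual sequence:

```cpp
#include <cassert>
#include <cstdint>

// Manually sign-extend a 32-bit value to 64 bits using only XOR and
// subtraction: (x ^ 2^31) - 2^31. If bit 31 was set, the subtraction
// borrows through the upper 32 bits, filling them with ones.
uint64_t sign_extend_32_to_64(uint32_t x) {
    const uint64_t bias = UINT64_C(1) << 31;
    return (static_cast<uint64_t>(x) ^ bias) - bias;
}
```

For instance, `0xFFFFFFFF` (i.e. -1 as a signed 32-bit value) extends to `0xFFFFFFFFFFFFFFFF`, while `1` stays `1`.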
BreadFish64 | 2a65442933 | Backend: Create "backend" folder | 2020-04-22 20:53:46 +01:00
    Similar to the "frontend" folder.