MerryMage
ebe44dab7a
stack_layout: Ignore warning C4324 for StackLayout
...
We expect the structure to be padded
2021-05-04 16:26:28 +01:00
MerryMage
c5f5c1d40f
frontend: Standardize emitted IR for exception raising
2021-05-04 16:14:26 +01:00
MerryMage
3b2c6afdc2
backend/x64: Move cycles_remaining and cycles_to_run from JitState to stack
2021-05-04 14:40:13 +01:00
MerryMage
d6592c7142
Remove ExceptionalExit hack
2021-05-04 14:40:13 +01:00
MerryMage
030ff82ba8
backend/x64: Move check_bit from JitState to stack
2021-05-04 14:40:13 +01:00
MerryMage
a1950d1d2f
backend/x64: Move save_host_MXCSR from JitState to stack
2021-05-04 14:19:05 +01:00
MerryMage
ddbc50cee0
backend/x64: Move spill from JitState onto the stack
2021-05-04 14:18:44 +01:00
MerryMage
795b9bea9a
Remove ChangeProcessorID hack
...
* No library users require this hack any longer.
2021-05-01 20:33:14 +01:00
MerryMage
6759942b56
emit_x64_data_processing: Correct bug in ArithmeticShiftRight64
...
This branch of this implementation is unused, and thus has not been tested.
2021-04-27 18:51:23 +01:00
MerryMage
68088c277c
emit_x64_data_processing: Reduce codesize of RotateRight32 for carry case
2021-04-26 21:57:22 +01:00
MerryMage
f77b98de36
emit_x64_data_processing: Reduce codesize of ArithmeticShiftRight32 for carry case
2021-04-26 21:57:08 +01:00
MerryMage
a2a687f208
emit_x64_data_processing: Reduce codesize of LogicalShiftRight32 for carry case
2021-04-26 21:56:42 +01:00
MerryMage
58ff457339
emit_x64_data_processing: Reduce codesize of LogicalShiftLeft32 for carry case
2021-04-26 21:35:06 +01:00
MerryMage
510862e50c
backend/x64: Change V flag testing to cmp instead of add
...
Prefer a non-destructive read to a destructive read.
2021-04-26 00:26:28 +01:00
MerryMage
3f74a839b9
emit_x64_floating_point: Optimize 64-bit EmitFPRSqrtEstimate
2021-04-26 00:26:28 +01:00
MerryMage
7bc9e36ed7
emit_x64_floating_point: Optimize 32-bit EmitFPRSqrtEstimate
2021-04-26 00:26:28 +01:00
MerryMage
e19f898aa2
ir: Reorganize to new top level folder
2021-04-21 22:22:07 +01:00
MerryMage
5bec200c36
block_of_code: Add sanity check that far_code_offset < total_code_size
2021-04-21 18:26:26 +01:00
MerryMage
08ed8b4a11
abi: Consolidate ABI information into one place
2021-04-21 18:25:04 +01:00
MerryMage
b2a4da5e65
block_of_code: Correct SpaceRemaining
2021-04-11 15:37:25 +01:00
merry
71491c0a4a
Merge pull request #596 from degasus/fix_perf_register
...
backend/x64: Fix PerfMapRegister usages.
2021-04-05 21:43:10 +01:00
MerryMage
9ab83180db
{a32,a64}_interface: Clear exclusive state during an exceptional exit
...
This is normally done by the ERET instruction during a service call.
2021-04-02 19:33:28 +01:00
MerryMage
c788bcdf17
block_of_code: Enable configuration of code cache sizes
2021-04-02 11:17:46 +01:00
Markus Wick
b2acdec8cb
backend/x64: Fix PerfMapRegister usages.
...
Both the far code and fast_dispatch_table_lookup were missing.
2021-04-02 00:17:07 +02:00
bunnei
1819c2183f
backend: x64: block_of_code: Double the total code size. (#595)
...
- The current limits are being hit in yuzu with some games (e.g. newer updates of BotW and SSBU).
- Increasing this fixes slow-downs in these games due to code being recompiled.
2021-04-01 20:53:49 +01:00
MerryMage
c4cff773b9
emit_x64_vector_floating_point: Avoid checking inputs for NaNs for three-ops where able
2021-03-28 21:54:36 +01:00
Wunk
e06933f123
block_of_code: Allow Fast BMI2 paths on Zen 3 (#593)
...
BMI2 instructions such as `pdep` and `pext` have been
known to be incredibly slow on AMD. On Zen 3
and newer, the performance of these instructions
is much greater, but earlier AMD
architectures should still avoid BMI2.
On Zen 2, pdep/pext took 300 cycles; on Zen 3 they take 3 cycles.
This is a big enough improvement to allow BMI2 code to
be dispatched if available. The Zen 3 architecture is detected
by checking the family of the processor.
2021-03-27 21:36:51 +00:00
Merry
c28f13af97
emit_x64_vector: Bugfix for EmitVectorReverseBits on AVX-512: Do not reverse bytes without vector
2021-03-27 21:32:43 +00:00
Merry
4d33feb1fa
emit_x64_vector: Bugfix for EmitVectorLogicalShiftRight8: shift_amount can be >= 8
2021-03-27 21:32:07 +00:00
Merry
91337788ee
emit_x64_vector: Bugfix for EmitVectorLogicalShiftLeft8: shift_amount can be >= 8
2021-03-27 21:31:51 +00:00
Merry
dc37fe6e28
emit_x64_vector: Bugfix for ArithmeticShiftRightByte: shift_amount can be >= 8
2021-03-27 21:31:22 +00:00
MerryMage
f5dd7122a2
EmitFPVectorMulAdd: Correct optimization flag (Unsafe_UnfuseFMA -> Unsafe_InaccurateNaN)
2021-02-21 21:30:20 +00:00
emuplz
6d4333c78e
fixed data + instruction cache callbacks (w/ tests)
2021-02-17 20:38:08 +00:00
emuplz
8728444af8
added support for the IC IVAU instruction
2021-02-17 20:38:06 +00:00
MerryMage
62003a2d89
A32/ir_emitter: Implement UpdateUpperLocationDescriptor
2021-02-07 20:41:48 +00:00
MerryMage
f229a68aed
a32_emit_x64: Update upper_location_descriptor in BXWritePC based on final location
2021-02-07 20:41:48 +00:00
MerryMage
7e5ae6076a
A32: Add arch_version option
2021-02-07 12:13:14 +00:00
bunnei
de389968eb
A32: Add hook_isb option.
2021-01-28 20:47:39 -08:00
MerryMage
0f27368fda
A64: Add hook_isb option
2021-01-26 23:41:21 +00:00
MerryMage
3806284cbe
emit_x64{,_vector}_floating_point: Fix non-FMA execution
...
Avoid repeated calls to GetArgumentInfo
2021-01-02 20:40:32 +00:00
MerryMage
6023bcd8ad
emit_x64_data_processing: Fix signed/unsigned warning
2021-01-02 20:12:48 +00:00
MerryMage
c15917b350
backend/x64: Add further Unsafe_InaccurateNaN locations
2021-01-02 20:12:48 +00:00
MerryMage
f9ccf91b94
Add Unsafe_InaccurateNaN optimization to all fma instructions
2021-01-02 17:22:50 +00:00
MerryMage
8c4463a0c1
emit_x64_data_processing: EmitSub: Use cmp where possible
2021-01-01 19:37:47 +00:00
MerryMage
e926f0b393
emit_x64_data_processing: Minor optimization for immediates in EmitSub
2021-01-01 13:35:01 +00:00
MerryMage
eeeafaf5fb
Introduce Unsafe_InaccurateNaN
2021-01-01 07:18:05 +00:00
ReinUsesLisp
4a9a0d07f7
backend/{a32,a64}_emit_x64: Add config entry to mask page table pointers
...
Add config entry to mask out the lower bits in page table pointers.
This is intended to allow users of Dynarmic to pack small integers
inside pointers and update the pair atomically without locks.
These lower bits can be masked out due to the expected alignment in
pointers inside the page table.
For the given usage, using AND on the pointer acts the same way as a
TEST instruction. That said, when the mask value is zero, TEST is still
emitted to keep the same behavior.
2020-12-29 19:16:46 +00:00
MerryMage
b47e5ea1e1
emit_x64_data_processing: Use BMI2 shifts where possible
2020-12-28 22:42:51 +00:00
Wunk
3e932ca55d
emit_x64_vector: Fix ArithmeticShiftRightByte zero_extend constant
...
Should be shifting in _bytes_ of `0x80`, not bits.
2020-11-09 09:47:51 -08:00
Wunkolo
ec52922dae
emit_x64_vector: Use explicit 64-bit mask constant
...
Exchange `~0ull` with `0xFFFFFFFFFFFFFFFF` when generating
the `zero_extend` constant.
2020-11-07 15:29:12 +00:00