// Copyright 2008 Dolphin Emulator Project
// SPDX-License-Identifier: GPL-2.0-or-later

// DL facts:
// Ikaruga uses (nearly) NO display lists!
// Zelda WW uses TONS of display lists
// Zelda TP uses almost 100% display lists except menus (we like this!)
// Super Mario Galaxy has nearly all geometry and more than half of the state in DLs (great!)

// Note that it IS NOT GENERALLY POSSIBLE to precompile display lists! You can compile them
// while interpreting them (i.e. right when they are called) and hope that the vertex format
// doesn't change, but that is all. The reason is that the vertex format affects the sizes of
// the vertices, so a list's layout isn't known until it is actually executed.
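
// For example (illustrative numbers, not taken from this file): a position attribute stored as
// three 32-bit floats takes 12 bytes per vertex, while the same attribute stored as three
// signed bytes takes 3, so the byte length of a draw command inside a list depends on the
// CP/VAT state that happens to be active when the list is executed.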

#include "VideoCommon/OpcodeDecoding.h"

#include "Common/CommonTypes.h"
#include "Common/Logging/Log.h"
#include "Core/FifoPlayer/FifoRecorder.h"
#include "Core/HW/Memmap.h"
#include "VideoCommon/BPMemory.h"
#include "VideoCommon/CPMemory.h"
#include "VideoCommon/CommandProcessor.h"
#include "VideoCommon/DataReader.h"
#include "VideoCommon/Fifo.h"
#include "VideoCommon/Statistics.h"
#include "VideoCommon/VertexLoaderManager.h"
#include "VideoCommon/XFMemory.h"

namespace OpcodeDecoder
{
namespace
{
bool s_is_fifo_error_seen = false;
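
// Runs the opcode decoder over a display list invoked via GX_CMD_CALL_DL. In deterministic
// dual-core mode the list contents were already copied into the FIFO aux buffer by the
// preprocessing pass, so they are popped from there instead of being read from emulated
// memory again. Returns the number of GPU cycles the list's commands account for.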
u32 InterpretDisplayList(u32 address, u32 size)
{
u8* start_address;

if (Fifo::UseDeterministicGPUThread())
start_address = static_cast<u8*>(Fifo::PopFifoAuxBuffer(size));
else
start_address = Memory::GetPointer(address);

u32 cycles = 0;

// Avoid crashing if Memory::GetPointer failed.
if (start_address != nullptr)
{
// temporarily swap dl and non-dl (small "hack" for the stats)
g_stats.SwapDL();

Run(DataReader(start_address, start_address + size), &cycles, true);
INCSTAT(g_stats.this_frame.num_dlists_called);

// un-swap
g_stats.SwapDL();
}

return cycles;
}
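
// Preprocessing-pass counterpart used by the deterministic ("synced") GPU thread mode: the
// display list bytes are pushed into the FIFO aux buffer so the GPU thread can later replay a
// stable snapshot of them, and Run<true> scans them now so side effects such as SetFinish are
// triggered immediately on the CPU thread.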
void InterpretDisplayListPreprocess(u32 address, u32 size)
{
u8* const start_address = Memory::GetPointer(address);

Fifo::PushFifoAuxBuffer(start_address, size);

if (start_address == nullptr)
return;

Run<true>(DataReader(start_address, start_address + size), nullptr, true);
}
} // Anonymous namespace

bool g_record_fifo_data = false;

void Init()
{
s_is_fifo_error_seen = false;
}
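
// Main decode loop. Walks the command stream in 'src' until it runs out of data or hits a
// command whose payload has not fully arrived yet, returning a pointer to the start of that
// incomplete opcode so the caller can resume there later. When 'cycles' is non-null it
// receives the estimated GPU cycle count of everything decoded. The is_preprocess template
// parameter selects the lightweight CPU-thread scanning pass used by the deterministic GPU
// thread mode instead of the real register loads.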
template <bool is_preprocess>
u8* Run(DataReader src, u32* cycles, bool in_display_list)
{
u32 total_cycles = 0;
u8* opcode_start = nullptr;
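
// Invoked whenever decoding has to stop (out of data or an incomplete command): publishes the
// cycle count consumed so far and hands back the position of the opcode that still needs more
// data.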

const auto finish_up = [cycles, &opcode_start, &total_cycles] {
if (cycles != nullptr)
{
*cycles = total_cycles;
}
return opcode_start;
};

while (true)
{
opcode_start = src.GetPointer();

if (!src.size())
return finish_up();

const u8 cmd_byte = src.Read<u8>();
switch (cmd_byte)
{
case GX_NOP:
total_cycles += 6; // Hm, this means that we scan over nop streams pretty slowly...
break;

case GX_UNKNOWN_RESET:
total_cycles += 6; // Datel software uses this command
DEBUG_LOG_FMT(VIDEO, "GX Reset?: {:08x}", cmd_byte);
break;

case GX_LOAD_CP_REG:
{
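// Payload: one sub-command byte selecting the CP register, followed by a 32-bit value.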
if (src.size() < 1 + 4)
return finish_up();

total_cycles += 12;

const u8 sub_cmd = src.Read<u8>();
const u32 value = src.Read<u32>();
LoadCPReg(sub_cmd, value, is_preprocess);
if constexpr (!is_preprocess)
INCSTAT(g_stats.this_frame.num_cp_loads);
}
break;

case GX_LOAD_XF_REG:
{
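// cmd2 packs the whole transfer: ((cmd2 >> 16) & 15) + 1 32-bit words of data follow,
// destined for the XF register block starting at address (cmd2 & 0xFFFF).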
if (src.size() < 4)
return finish_up();

const u32 cmd2 = src.Read<u32>();
const u32 transfer_size = ((cmd2 >> 16) & 15) + 1;
if (src.size() < transfer_size * sizeof(u32))
return finish_up();

total_cycles += 18 + 6 * transfer_size;

if constexpr (!is_preprocess)
{
const u32 xf_address = cmd2 & 0xFFFF;
LoadXFReg(transfer_size, xf_address, src);

INCSTAT(g_stats.this_frame.num_xf_loads);
}
src.Skip<u32>(transfer_size);
}
break;

case GX_LOAD_INDX_A: // Used for position matrices
case GX_LOAD_INDX_B: // Used for normal matrices
case GX_LOAD_INDX_C: // Used for postmatrices
case GX_LOAD_INDX_D: // Used for lights
{
if (src.size() < 4)
return finish_up();

total_cycles += 6;

// Map the command byte to its ref array.
// GX_LOAD_INDX_A (32) -> 0xC
// GX_LOAD_INDX_B (40) -> 0xD
// GX_LOAD_INDX_C (48) -> 0xE
// GX_LOAD_INDX_D (56) -> 0xF
const int ref_array = (cmd_byte / 8) + 8;

if constexpr (is_preprocess)
PreprocessIndexedXF(src.Read<u32>(), ref_array);
else
LoadIndexedXF(src.Read<u32>(), ref_array);
}
break;

case GX_CMD_CALL_DL:
{
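// Payload: the 32-bit address of the display list, followed by its size in bytes.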
if (src.size() < 8)
return finish_up();

const u32 address = src.Read<u32>();
const u32 count = src.Read<u32>();

if (in_display_list)
{
total_cycles += 6;
INFO_LOG_FMT(VIDEO, "recursive display list detected");
}
else
{
if constexpr (is_preprocess)
InterpretDisplayListPreprocess(address, count);
else
total_cycles += 6 + InterpretDisplayList(address, count);
}
}
break;

case GX_CMD_UNKNOWN_METRICS: // Zelda Four Swords calls it and checks the metrics registers afterwards
total_cycles += 6;
DEBUG_LOG_FMT(VIDEO, "GX 0x44: {:08x}", cmd_byte);
break;

case GX_CMD_INVL_VC: // Invalidate Vertex Cache
total_cycles += 6;
DEBUG_LOG_FMT(VIDEO, "Invalidate (vertex cache?)");
break;

case GX_LOAD_BP_REG:
// In skipped_frame case: We have to let BP writes through because they set
// tokens and stuff. TODO: Call a much simplified LoadBPReg instead.
      {
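        // finish_up() (defined near the top of Run) stores the cycle estimate and returns the
        // start of the current command so the caller can resume decoding here once more FIFO
        // data is available (reading inferred from how it is used in these early-out paths).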
        if (src.size() < 4)
          return finish_up();

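        // Fixed cost assumed for a BP register write in the cycle estimate; like the other
        // counts in this function it is an approximation, not measured hardware timing.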
        total_cycles += 12;

        const u32 bp_cmd = src.Read<u32>();
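        // The preprocess pass runs ahead of the actual GPU work, so LoadBPRegPreprocess presumably
        // handles only the subset it needs (tokens, finish, and the like), while the normal pass
        // applies the register and counts it in the frame statistics.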
        if constexpr (is_preprocess)
        {
          LoadBPRegPreprocess(bp_cmd);
        }
        else
        {
          LoadBPReg(bp_cmd);
          INCSTAT(g_stats.this_frame.num_bp_loads);
        }
      }
      break;

    // draw primitives
    default:
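      // Opcodes with the top two bits 10 (0x80..0xBF) are draw commands: the low three bits pick
      // one of the eight vertex attribute tables and the remaining bits give the primitive type
      // (reading follows the GX_VAT_MASK / GX_PRIMITIVE_MASK usage below).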
      if ((cmd_byte & 0xC0) == 0x80)
      {
        // load vertices
        if (src.size() < 2)
          return finish_up();

        const u16 num_vertices = src.Read<u16>();
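        // RunVertices appears to return the number of vertex bytes it consumed, or a negative
        // value when the whole draw command is not yet in the buffer.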
        const int bytes = VertexLoaderManager::RunVertices(
            cmd_byte & GX_VAT_MASK,  // Vertex loader index (0 - 7)
            (cmd_byte & GX_PRIMITIVE_MASK) >> GX_PRIMITIVE_SHIFT, num_vertices, src, is_preprocess);

        if (bytes < 0)
          return finish_up();

        src.Skip(bytes);

        // 4 GPU ticks per vertex, 3 CPU ticks per GPU tick
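        // i.e. 12 CPU cycles per vertex plus a small fixed cost; a 100-vertex draw, for example,
        // would add 100 * 12 + 6 = 1206 cycles to the estimate.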
        total_cycles += num_vertices * 4 * 3 + 6;
      }
      else
      {
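        // Unknown opcode: report it through the command processor only the first time
        // (s_is_fifo_error_seen), presumably to avoid spamming the user once the FIFO has
        // desynced; the error is still logged each time, one cycle is charged, and decoding
        // carries on.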
        if (!s_is_fifo_error_seen)
          CommandProcessor::HandleUnknownOpcode(cmd_byte, opcode_start, is_preprocess);
        ERROR_LOG_FMT(VIDEO, "FIFO: Unknown Opcode({:#04x} @ {}, preprocessing = {})", cmd_byte,
                      fmt::ptr(opcode_start), is_preprocess ? "yes" : "no");
        s_is_fifo_error_seen = true;
        total_cycles += 1;
      }
      break;
    }

    // Display lists get added directly into the FIFO stream
    if constexpr (!is_preprocess)
    {
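      // GX_CMD_CALL_DL is skipped because display list contents are decoded (and thus recorded)
      // inline, so recording the call itself would presumably duplicate them in the capture.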
      if (g_record_fifo_data && cmd_byte != GX_CMD_CALL_DL)
      {
        const u8* const opcode_end = src.GetPointer();
        FifoRecorder::GetInstance().WriteGPCommand(opcode_start, u32(opcode_end - opcode_start));
      }
    }
  }
}

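// Explicit instantiations for the two decode modes: Run<true> is the preprocess-only scan and
// Run<false> is the normal, executing decoder (per the is_preprocess template parameter above).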
template u8* Run<true>(DataReader src, u32* cycles, bool in_display_list);
template u8* Run<false>(DataReader src, u32* cycles, bool in_display_list);

}  // namespace OpcodeDecoder