Since commit 92e4f02 moved the logging logic into the store_slow_path
function, stores have been logged even when the actually_store parameter
is false. This breaks logging for all atomic instructions: the "amo"
function calls store_slow_path with a nullptr argument and actually_store
set to false, while the callee calls reg_from_bytes regardless of the
actually_store value, dereferencing the nullptr. This commit logs the
memory access only if it actually happened.
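A minimal sketch of the fix, with illustrative names rather than Spike's exact signatures: every use of the byte buffer (including the log call) is guarded by actually_store, so the AMO path, which passes bytes == nullptr with actually_store == false, never dereferences the pointer.

```cpp
#include <cstdint>
#include <cstdio>
#include <cstddef>

// Illustrative sketch, not Spike's real signature: returns whether a
// store (and hence a log line) actually happened.
bool store_slow_path(uint64_t addr, size_t len,
                     const uint8_t* bytes, bool actually_store) {
  if (!actually_store)
    return false;          // no store happened: nothing to log, bytes untouched
  uint64_t val = 0;
  for (size_t i = 0; i < len && i < 8; i++)
    val |= (uint64_t)bytes[i] << (8 * i);   // stand-in for reg_from_bytes
  printf("store 0x%llx <- 0x%llx\n",
         (unsigned long long)addr, (unsigned long long)val);
  return true;
}
```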
Previously the cache block size had to be initialized via the special
set_cache_blocksz setter and was left uninitialized if the setter was
never called. This commit moves the initialization of the block size
into mmu_t's constructor; the value is configured through the cfg_t
struct.
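A hedged sketch of the shape of this change (cfg_t and mmu_t members here are illustrative, not Spike's actual definitions): the block size becomes a constructor parameter fed from the config struct, so there is no window in which it is uninitialized and no setter to forget.

```cpp
#include <cstddef>

// Illustrative config struct; Spike's real cfg_t carries many more fields.
struct cfg_t {
  size_t cache_blocksz = 64;   // bytes
};

class mmu_t {
 public:
  // Block size is fixed at construction time from the config.
  explicit mmu_t(const cfg_t& cfg) : blocksz(cfg.cache_blocksz) {}
  size_t block_size() const { return blocksz; }

 private:
  const size_t blocksz;        // always initialized; no setter needed
};
```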
These are the changes:
- Zvkg (vghsh.vv, vgmul.vv)
  - vl must be a multiple of EGS=4 (spec p.13)
  - Check alignment of vd, vs1, vs2 with lmul
- Zvksh (vsm3c.vi, vsm3me.vv)
  - vstart and vl must be multiples of EGS=4 (spec p.17)
  - Check alignment of vd, vs1, vs2 with lmul
- Zvksed (vsm4k.vi, vsm4r.[vv,vs])
  - vstart and vl must be multiples of EGS=4 (spec p.16)
  - Check alignment of vd, vs1, vs2 with lmul
  - For vsm4r.vs, check overlap between vs2 and vd (spec p.7)
- Zvbb (vwsll.[vv,vx,vi])
  - Check alignment of vd, vs1, vs2 with lmul (for widening instructions)
  - Check overlap between vs2 and vd
- Zvkned
  - vstart and vl must be multiples of EGS=4 (spec p.14)
  - Check alignment of vd, vs1, vs2 with lmul
  - For vaes*.vs, check overlap between vs2 and vd (spec p.7)
- Zvknh
  - Check alignment of vd, vs1, vs2 with lmul
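The checks above share one precondition pattern. A sketch of that pattern (illustrative helper, not Spike's require_* macros): vstart and vl must be multiples of the element-group size EGS=4, and each vector register operand must be aligned to the register-group size implied by LMUL.

```cpp
// Illustrative precondition check shared by the vector-crypto
// instructions listed above; EGS = 4 element groups per the Zvk specs.
constexpr unsigned EGS = 4;

bool egs_checks_ok(unsigned vstart, unsigned vl,
                   unsigned vd, unsigned vs1, unsigned vs2,
                   unsigned lmul) {
  // vstart and vl must be multiples of the element-group size
  if (vstart % EGS != 0 || vl % EGS != 0)
    return false;
  // register numbers must be aligned to the register-group size (LMUL)
  return vd % lmul == 0 && vs1 % lmul == 0 && vs2 % lmul == 0;
}
```

In Spike itself a failed check raises an illegal-instruction trap rather than returning a bool; the boolean form here just makes the predicate explicit.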
Prior to this commit, processor_t's constructor explicitly initialized
xlen with zero (hence get_xlen and get_const_xlen both returned zero);
the value was only corrected later, during the first reset call. This
made it impossible to use xlen during custom extension registration,
which happens before the first reset. This patch initializes xlen with
the correct value from the start.
The original decoder in Spike doesn't support extracting fields such as
funct7 or opcode. This commit adds that support for use in other projects.
Signed-off-by: Tianrui Wei <tianrui@tianruiwei.com>
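A sketch of the kind of field accessors this adds (the helper names are illustrative; the bit positions are from the base RISC-V instruction encoding): for a 32-bit instruction, opcode is bits [6:0], funct3 is bits [14:12], and funct7 is bits [31:25].

```cpp
#include <cstdint>

// Illustrative field extractors for 32-bit RISC-V instructions,
// following the base ISA encoding (R-type layout shown):
//   [31:25] funct7 | [24:20] rs2 | [19:15] rs1 | [14:12] funct3
//   | [11:7] rd | [6:0] opcode
inline uint32_t insn_opcode(uint32_t insn) { return insn & 0x7f; }
inline uint32_t insn_funct3(uint32_t insn) { return (insn >> 12) & 0x7; }
inline uint32_t insn_funct7(uint32_t insn) { return (insn >> 25) & 0x7f; }
```

For example, `add x1, x2, x3` (0x003100b3) and `sub x1, x2, x3` (0x403100b3) share opcode 0x33 and funct3 0, and differ only in funct7 (0x00 vs 0x20).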
This is a performance enhancement, because it prevents some pathological
conflict cases (e.g. aligned memcpy), but it also cleans up some aspects
of the code (e.g. ITLB refills don't interact with the DTLB).