Changes to make RTS/CTS flow control work...
This does not turn on the builtin hardware flow control on the SoC's usart
device, because that doesn't work on uart1 due to a chip erratum (they
forgot to wire up pin PA21 to RTS0 internally). Instead it uses the
hardware flow control logic where the tty layer calls the driver to assert
and de-assert the flow control lines as needed. This prevents overruns at
the tty layer (app doesn't read fast enough), but does nothing for overruns
at the driver layer (interrupts not serviced fast enough).
To work around the wiring problem with RTS0, the driver reassigns that pin
as a GPIO and controls it manually. It only does so if given permission via
hint.uart.1.use_rts0_workaround=1, to prevent accidentally driving the pin
if uart1 is used without flow control (because something not related to
serial IO could be wired to that pin).
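A minimal sketch of the workaround, assuming the PIO helper API in
sys/arm/at91/at91_piovar.h (resource_int_value() is the stock kernel
hints lookup; the exact pin macro names here are illustrative):

    /* Claim PA21 as a manually-driven GPIO, but only with permission. */
    static int use_rts0_workaround;

    static void
    at91_usart_rts0_init(int unit)
    {
        resource_int_value("uart", unit, "use_rts0_workaround",
            &use_rts0_workaround);
        if (!use_rts0_workaround)
            return;             /* pin may be wired to something else */
        /* Reassign PA21 from the usart to GPIO output, de-asserted. */
        at91_pio_use_gpio(AT91RM92_PIOA_BASE, AT91C_PIO_PA21);
        at91_pio_gpio_output(AT91RM92_PIOA_BASE, AT91C_PIO_PA21, 0);
        at91_pio_gpio_set(AT91RM92_PIOA_BASE, AT91C_PIO_PA21);
    }

    /* Called from the driver's signal path; RTS is active-low. */
    static void
    at91_usart_rts0(int assert)
    {
        if (!use_rts0_workaround)
            return;
        if (assert)
            at91_pio_gpio_clear(AT91RM92_PIOA_BASE, AT91C_PIO_PA21);
        else
            at91_pio_gpio_set(AT91RM92_PIOA_BASE, AT91C_PIO_PA21);
    }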
In addition to the RTS0 workaround, driver changes were needed in the area
of reading the current set of DCE signals. A priming read is now done at
attach() time, and the interrupt routine now sets SER_INT_SIGCHG when any
of the DCE signals change. Without these changes, nothing could ever be
transmitted, because the tty layer thought CTS was de-asserted (when in fact
we had just never read the status register, and the hwsig variable was
init'd to CTS de-asserted).
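A sketch of both pieces, assuming the at91 register access macros and
the bit names in at91_usartreg.h; the CTS polarity shown follows the
usual active-low convention and should be checked against the datasheet:

    /* Priming read at attach() time: capture the real line state so
     * sc_hwsig doesn't start out claiming CTS is de-asserted. */
    static void
    at91_usart_prime_sig(struct uart_softc *sc)
    {
        uint32_t csr;

        csr = RD4(&sc->sc_bas, USART_CSR);
        if ((csr & USART_CSR_CTS) == 0)     /* CSR bit mirrors the pin */
            sc->sc_hwsig |= SER_CTS;
    }

    /* Interrupt routine: report any DCE signal change to the core. */
    static int
    at91_usart_sig_ipend(uint32_t csr)
    {
        int ipend = 0;

        if (csr & (USART_CSR_CTSIC | USART_CSR_DCDIC |
            USART_CSR_DSRIC | USART_CSR_RIIC))
            ipend |= SER_INT_SIGCHG;    /* core re-reads the signals */
        return (ipend);
    }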
Changes to support bulk high-speed (230kbps and higher) data reception...
Allow the receive fifo size to be tuned with hint.uart.<dev>.fifo_bytes.
For high speed receive, a fifo size of 1024 works well. The default is
still 128 bytes if no hint is provided. Using a value larger than 384
requires a change in dev/uart/uart_core.c to size the intermediate
buffer as MAX(384, 3*sc->sc_rxfifosize).
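Roughly, the tunable plus the core change (resource_int_value() is the
standard hints API; the MAX() expression is the one cited above):

    /* In the at91 usart attach: honor hint.uart.<dev>.fifo_bytes. */
    int fifo_bytes = 128;           /* historical default */

    resource_int_value("uart", device_get_unit(sc->sc_dev),
        "fifo_bytes", &fifo_bytes);
    sc->sc_rxfifosize = fifo_bytes;

    /* In dev/uart/uart_core.c: scale the intermediate buffer to suit. */
    sc->sc_rxbufsz = MAX(384, 3 * sc->sc_rxfifosize);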
Recalculate the receive timeout whenever the baud rate changes. At low
baud rates (19.2kbps and below) the timeout is the number of bits in 2
characters. At higher speed it's calculated to be 500 microseconds
worth of bits. The idea is to compromise between being responsive in
interactive situations and not timing out prematurely during a brief
pause in bulk data flow. The old fixed timeout of 1.5 characters was
just 32 microseconds at 460kbps.
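The arithmetic as a sketch (the at91 USART receive-timeout register
counts bit periods, so the value is expressed in bits; the helper name
is illustrative):

    static uint32_t
    at91_usart_rx_timeout(int baudrate)
    {
        /* At and below 19.2kbps: two characters' worth of bits. */
        if (baudrate <= 19200)
            return (2 * 10);        /* 10 bits per async character */
        /* Above that: 500 microseconds' worth of bits, e.g.
         * 460800 / 2000 = 230 bit times, where the old fixed
         * 1.5 characters was 15 bits, about 32us at 460.8kbps. */
        return (baudrate / 2000);
    }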
At interrupt time, check for receiver holding register overrun status
and set the corresponding status bit in the return value.
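In the receive path that amounts to something like this (bit names from
at91_usartreg.h and dev/uart/uart.h; the wrapper function is a sketch):

    /* Fold receiver overrun into the standard uart(4) status bits,
     * then clear the sticky error condition in the hardware. */
    static int
    at91_usart_note_overrun(struct uart_softc *sc, uint32_t csr, int xc)
    {
        if (csr & USART_CSR_OVRE) {
            xc |= UART_STAT_OVERRUN;
            WR4(&sc->sc_bas, USART_CR, USART_CR_RSTSTA);
        }
        return (xc);
    }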
When handling a buffer overrun, get a single buffer emptied and handed
back to the hardware as quickly as possible, then deal with the second
buffer. This at least minimizes data loss compared to the old logic
that fully processed both buffers before restarting the hardware.
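Purely illustrative shape of the reordering (hypothetical buffer type
and helper names, not the driver's own):

    struct rxbuf;
    void drain_to_tty(struct rxbuf *);
    void hand_to_hardware(struct rxbuf *);

    /* New order: get one buffer back to the hardware as fast as
     * possible; the old code drained both before restarting. */
    static void
    rx_overrun(struct rxbuf *a, struct rxbuf *b)
    {
        drain_to_tty(a);        /* empty buffer A... */
        hand_to_hardware(a);    /* ...and re-arm it immediately */
        drain_to_tty(b);        /* only now process buffer B */
        hand_to_hardware(b);
    }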
Rewrite the logic for handling buffers after a receive timeout. The
original author speculated in a comment that there may be a race with
high speed data. There was, although it was rare. The code now handles
all three possible scenarios on receive timeout: two empty buffers, one
empty and one partial buffer, or one full and one partial buffer.
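An illustrative decision tree for the timeout handler (hypothetical
names again; in the real driver the fill counts come from the PDC
receive counters, and the race is that more data can arrive between
the timeout interrupt firing and the handler looking at the buffers):

    void drain_full(struct rxbuf *);
    void drain_partial(struct rxbuf *, int);

    static void
    rx_timeout(struct rxbuf *a, struct rxbuf *b, int used_a, int used_b)
    {
        if (used_a == 0 && used_b == 0)
            return;                     /* both empty: spurious timeout */
        if (used_b == 0) {
            drain_partial(a, used_a);   /* one empty, one partial */
            return;
        }
        drain_full(a);                  /* buffer A filled completely and */
        drain_partial(b, used_b);       /* data spilled into buffer B */
    }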
Reviewed by: imp
Add the ability for userland to be notified of changes on gpio pins via
a select(2)/read(2) interface (see the userland sketch below).
Change the interrupt handler from filtered to threaded.
Because of the uiomove() calls in the new interface, change locking from
standard mutex to sx.
Add / restore the at91_gpio_high_z() function.
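From userland the new interface is used about like this (a sketch; the
/dev/pio0 node name and the format of what read(2) returns are
assumptions, check the driver for the real ABI):

    #include <sys/select.h>

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        char buf[64];
        fd_set rset;
        ssize_t n;
        int fd;

        fd = open("/dev/pio0", O_RDONLY);   /* assumed node name */
        if (fd == -1) {
            perror("open");
            return (1);
        }
        for (;;) {
            FD_ZERO(&rset);
            FD_SET(fd, &rset);
            /* Sleep until a watched pin changes... */
            if (select(fd + 1, &rset, NULL, NULL, NULL) == -1)
                break;
            /* ...then read out the pending change events. */
            n = read(fd, buf, sizeof(buf));
            if (n <= 0)
                break;
            printf("%zd bytes of pin-change data\n", n);
        }
        close(fd);
        return (0);
    }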
Reviewed by: imp (long ago)
Make at91_pio_gpio_get() return the actual state of the pins as a set
of bits, not just a 0/1 indicating whether any of the masked bits are on.
This is compatible with the single in-tree caller of this function right now
(at91_vbus_poll() in dev/usb/controller/at91dci_atmelarm.c).
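A sketch of the change, using the driver's register-window idiom (the
before/after bodies are inferred from the description above):

    uint32_t
    at91_pio_gpio_get(uint32_t pio, uint32_t data_mask)
    {
        uint32_t *PIO = (uint32_t *)(AT91_BASE + pio);

        /* Was: return ((PIO[PIO_PDSR / 4] & data_mask) != 0); */
        return (PIO[PIO_PDSR / 4] & data_mask);  /* the masked bits */
    }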
m_get2() and m_getm2() take their arguments in different orders, and
that can drive someone crazy. While m_get2() is young and not
documented yet, change its order of arguments to match m_getm2().
Sorry for churn, but better now than later.
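Resulting usage, with the length first so it matches
m_getm2(m, len, how, type, flags):

    #include <sys/param.h>
    #include <sys/mbuf.h>

    static struct mbuf *
    alloc_frame(void)
    {
        /* New order: size, how, type, flags. */
        return (m_get2(1536, M_NOWAIT, MT_DATA, M_PKTHDR));
    }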
The RTC hardware can maintain the time across restarts on some of these
processors, either on reboot or after power down with battery backup.
However, the AT91RM9200 RTC always resets on reboot, making it just
about useless at the moment (if we support a low-power mode or an
extended sleep mode, it might become useful).
Submitted by: Ian Lepore
On single-core devices, set_stackptrs() is only ever called with cpu = 0
from initarm() and will be identical to the existing function. On SMP
this needs to be implemented for sys/arm/mp_machdep.c, but the
implementations are identical for each SoC.
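The per-SoC implementations all end up looking like this (modeled on
the existing initarm() stack setup; names and constants are the usual
arm machdep ones):

    static void
    set_stackptrs(int cpu)
    {
        set_stackptr(PSR_IRQ32_MODE,
            irqstack.pv_va + ((IRQ_STACK_SIZE * PAGE_SIZE) * (cpu + 1)));
        set_stackptr(PSR_ABT32_MODE,
            abtstack.pv_va + ((ABT_STACK_SIZE * PAGE_SIZE) * (cpu + 1)));
        set_stackptr(PSR_UND32_MODE,
            undstack.pv_va + ((UND_STACK_SIZE * PAGE_SIZE) * (cpu + 1)));
    }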
o Disable multi-block operations: they sometimes fail.
o Don't use the PROOF bits yet: they hang the system hard.
o Disable the multi-block operations for !rm9200, but it
still doesn't help.
o Fix the writes-of-fewer-than-12-bytes errata workaround so it
  actually works.
o Enable, for the moment, reporting extra bytes soaked up.
All this amounts to a major restructuring of the driver. I've tried
to preserve the other silicon
workarounds that we've added over the years, but haven't had a chance
to extensively test on other hardware. On my AT91RM9200 with 30MHz /
1-wire / 64-block transfers, I've been able to go from ~0.66MB/s to
2.25MB/s in the simple tests I performed, almost a 3.5x improvement.
This cuts the boot time almost in half when everything else goes
right (timed from rtc message to login: prompt).
PR: 155214
Submitted by: Ian Lepore
explicitly enable that. The driver chose to use 60MHz / 2 (30MHz)
most of the time rather than 60MHz / 4 (15MHz) based on the Linux
driver of the time. This pushes the spec a little in order to not
suffer the penalty of running at 15MHz. However, when other bus
masters are active in the system, and the user tries 4-wire mode, the
internal bus arbitration would fail with data loss as a result.
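The divisor math behind that choice: the MCI clock is
MCK / (2 * (CLKDIV + 1)), so a 60MHz MCK gives 30MHz at CLKDIV = 0 and
15MHz at CLKDIV = 1. A sketch of an opt-in selection (illustrative
names, not the committed code):

    static uint32_t
    mci_clkdiv(uint32_t mck, uint32_t target, int allow_overclock)
    {
        uint32_t clkdiv;

        /* Smallest divisor that does not exceed the target rate... */
        for (clkdiv = 0; mck / (2 * (clkdiv + 1)) > target; clkdiv++)
            continue;
        /* ...unless the user explicitly allows pushing the spec a
         * little (30MHz against a 25MHz target with a 60MHz MCK). */
        if (allow_overclock && clkdiv > 0 &&
            mck / (2 * clkdiv) <= target + target / 5)
            clkdiv--;
        return (clkdiv);
    }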
# Comments from PR were reworked to reflect my historical perspective
PR: 155214 (partial)
Submitted by: Ian Lepore
Cumulative patch of changes that are not vendor-specific:
- ARMv6 and ARMv7 architecture support
- ARM SMP support
- VFP/Neon support
- ARM Generic Interrupt Controller driver
- Simplification of startup code for all platforms