Hi all
I am having a program hang, and suspect it is related to clearing and setting the interrupt flag incorrectly.
As an example, the code below shows how I have intercepted the keyboard interrupt to increment a flag in my main code.
Basically it saves me from repeatedly checking for keyboard input.
I have the same scenario with the timer tick and touch screen.
At the same time I have an adc that raises IRQ10 2000 times per second.
If an IRQ10 occurs during the handling of the keyboard interrupt, it gets ignored.
And if I miss this IRQ, then I have to read it "manually" to allow it to keep raising IRQ10.
So if I use CLI and STI when they are not required (or wrongly), and I don't know that an IRQ10 has been missed, then the whole thing grinds to a halt.
Therefore my question to the group is this:
If my keyboard handler clears and sets the interrupt flag, and I then pass control to the old keyboard handler, what does it do with the interrupt flag? Does it clear it again and, when finished, set it?
Looking at my code, does it even need to be cleared?
.MODEL MEDIUM
.486
.CODE
;-------------------------
        PUBLIC TSRKbd
OldHandler2 DWORD ?
VarblSeg2   DW    ?
VarblOfs2   DW    ?
;-------------------------
TSRKbd  PROC FAR
;------------
        PUSH BP
        MOV  BP,SP
;------------
        push ds                         ;for return to basic
        push cs                         ;
        pop  ds                         ;set to current CS
;------------
        MOV  BX,[BP+6]                  ;segment of basic variable
        CMP  BX,0                       ;if zero then program is terminating
        JZ   Restore2                   ;so restore old handler
        MOV  VarblSeg2,BX               ;
        MOV  BX,[BP+10]                 ;offset of basic variable
        MOV  VarblOfs2,BX               ;
;-----------------------------------
        mov  ax,3509h                   ;function 35h
        int  21h                        ;get vector for keyboard
        mov  WORD PTR OldHandler2[0],bx ;save addr of original handler
        mov  WORD PTR OldHandler2[2],es
        mov  ax,2509h                   ;function 25h
        mov  dx,OFFSET TsrKeys          ;new handler addr
        int  21h                        ;set vector with addr of new handler
;----------------------------------
        jmp  Dos_Exit
Restore2:
        lds  dx,OldHandler2
        mov  ax,2509h
        int  21h
        jmp  Dos_Exit

TsrKeys PROC FAR
        cli
;---------
        PUSH DS
        PUSH BX
;---------------
        mov  ds,VarblSeg2               ;set seg to Basic's variable
        mov  bx,VarblOfs2               ;set ofs  "    "      "
        inc  dword ptr [bx]             ;inc the variable
;---------------
        POP  BX
        POP  DS
;*********
        sti
        jmp  cs:OldHandler2
TsrKeys ENDP
;---------------------------------------
Dos_Exit:
        pop  ds                         ;restore basic's DS
        POP  BP                         ;
        ret  8                          ;clean up
TSRKbd  ENDP
END
I always spend a lot of time aligning my quotes and comments, but whenever I post them they turn to Sh*t. :'(
Regards
Dinosaur
Well, I'm not sure about your problem, but your code boxes are messed up. I think you forgot to put a / in front of the closing Quote tag. That should make your code display properly.
I don't think you need to worry about either CLI or STI since you are always calling the original keyboard handler after you pre-process the interrupt.
I think any problems that you are seeing might be caused by your program not making a TSR request after it installs your handler. That is very important, because your code would be replaced by the next program that DOS loads into memory unless you make a special request to terminate, but stay resident. I might be mistaken, but I don't think I saw that in your code anywhere. I can see that you have the procedures named to indicate that effect, but without making the special call ... DOS just won't know!
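If it is missing, the special call is DOS function 31h; a minimal sketch (the paragraph count here is only a placeholder: you must keep everything your handler uses resident):

        mov  ax,3100h   ;AH=31h: terminate and stay resident, AL=00h: return code
        mov  dx,100h    ;memory to keep, in 16-byte paragraphs (placeholder value)
        int  21h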
This is going back a few months for me, but I have included two small programs that let you see how many times the keyboard service interrupt is called. The first, TSRCOUNT, either installs or removes the keyboard interrupt service. The second, KEYCOUNT, returns the value of the counter which is incremented each time the interrupt is processed when TSRCOUNT is installed, and otherwise returns a bogus value for the count if TSRCOUNT is not installed. The source is for the A86 assembler that is freely available on the net, but I have also included the .com and .lst files so you can test it directly if you like. It's *always* a bad idea to run someone else's .com file directly, but you could check them out with DEBUG and compare them with the .lst files to make sure you trust them.
Hope this helps a little bit. They are fairly simple and similar to what you are wanting to do, so I thought I'd pass them along. The only other thing I should mention is that your interrupt service might be called when other tasks are scheduled, so you have to take care to remember the segment where the variable that you are incrementing lives. Otherwise, you could be touching variables in the wrong segment ... or causing GPFs!
[attachment deleted by admin]
Thanks Phil.
My code is always written with exclusivity in mind, something I didn't mention.
My program is in complete charge of the computer; nothing else will get loaded by DOS
once I take control. The CPU board is actually part of a machine that we build.
Other than Command.com no other drivers are used, with the exception of Unreal, which allows me to address memory linearly (unsegmented).
Also, my one and only call is to replace the original handler addr with the new handler addr.
After that I don't call that routine again until I quit the program to restore the original addr.
This process is repeated for the timer tick and touch screen.
I suspected that someone might pick up on the number of interrupts per second,
i.e. 18 for the timer tick, 2000 for my adc, etc.
Amazingly, this 15-module program that is performing a huge number of tasks is only taking 47 usec per loop,
on a 133 MHz 486.
I have actually solved the hang problem by inserting CLI and STI in a memory-clearing routine.
But I still don't understand the rules about using them.
Robert,
Quote
I think you forgot to put a / in front of the closing Quote tag
On the old site we had a close-quote button, so I guess I am a slow learner.
Regards
Dinosaur
The CLI in the TsrKeys routine is redundant, because when an interrupt handler is called in real mode the interrupt flag has already been cleared (interrupts disabled).
The STI in TsrKeys is something of an error, because the code in OldHandler2 may expect IF to be cleared, and in fact it is not.
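For illustration, a sketch of the handler with both removed, keeping your names (the explicit CS overrides are what the assembler generates anyway, since the variables live in the code segment):

TsrKeys PROC FAR
        PUSH DS
        PUSH BX
        mov  bx,cs:VarblOfs2   ;fetch the offset before DS changes
        mov  ds,cs:VarblSeg2   ;set DS to Basic's variable segment
        inc  dword ptr [bx]    ;inc the variable
        POP  BX
        POP  DS
        jmp  cs:OldHandler2    ;IF is still clear, as the old handler expects; its IRET will restore it
TsrKeys ENDP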
So, are you saying that when an interrupt is detected by DOS, it already clears the interrupt flag
before I even get to the handler?
Therefore if I have a lot of work to do to handle the interrupt, but want to finish my current task,
then the interrupt flag stays cleared all that time? (That's not what I do, but just a theoretical question.)
That still leaves the question:
In real mode in DOS, when am I allowed/supposed to use CLI/STI?
In the past I have used them only when bit-bashing an adc read, where an interrupt would stuff
up my bit timing.
Regards
AFAIK the BIOS IRQ1 handler will STI as soon as possible to avoid blocking IRQ0. This is normal practice for interrupt handlers. The handler must read the keystroke data from the keyboard controller and then essentially reprogram it in preparation for the next key, and the controller is relatively slow, so I would expect the service time to be relatively long. The attachment contains a program that is supposed to measure the time required to service IRQ1. The program uses system timer 2, which (assuming a 1,193,182Hz input clock) has a resolution of 0.838 microsecond and will overflow after 54,925 microseconds. Under Windows 2000 the timer overflows for all keystrokes other than an Enter immediately following a Pause/Break, for which the time is in the 54-55ms range. I am measuring this same range for all keystrokes on an old Pentium MMX system running under Windows 98 MS-DOS mode. I'm not certain that these times are correct, but I think it very likely that the service time would be longer than the 0.5ms period of your adc interrupt.
The only solution I can see would be to use a higher priority interrupt for your adc. IRQ0 would be one possibility, if available. I think the handler could probably separate the adc interrupts from the timer 0 interrupts by checking the status of timer 0. Another possibility, and the only possibility available to an ISA card, would be the NMI interrupt. When an ISA card asserts Channel Check (CHCHK#), if NMI interrupts are enabled (bit7 of I/O port 70h is clear) and recognition of Channel Check is enabled (bit3 of I/O port 61h is clear), a NMI interrupt is issued to the processor. The processor associates entry 2 in the interrupt vector table with the NMI. To verify that the NMI was triggered by Channel Check, the handler can check that bit6 of I/O port 61h is set.
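A sketch of how such an NMI handler might verify the source (the handler name is hypothetical):

NmiHandler:
        push ax
        in   al,61h        ;system control port B
        test al,40h        ;bit6 set means Channel Check is the source
        jz   NotOurs
        ;... read the adc here ...
NotOurs:
        pop  ax
        iret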
In real address mode the processor responds to an interrupt by pushing the flags, clearing the interrupt, trap, and AC flags, pushing CS, pushing IP, and then loading CS and IP from the interrupt vector table. The IRET instruction pops the stack, leaving the flags in the state they were in before the interrupt occurred.
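Incidentally, this is also why the old trick of invoking a previous handler with PUSHF followed by a far CALL works: it builds the same stack frame an interrupt would, so the previous handler's IRET pops it normally. A sketch, using Dinosaur's OldHandler2:

        pushf                  ;what the CPU would have pushed
        call cs:OldHandler2    ;far indirect call pushes CS:IP; the old handler's IRET pops the flags too
        ;... post-processing here, after the old handler has finished ...
        iret                   ;return to the interrupted code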
[attachment deleted by admin]
Sounds like the conclusion is that I don't need to CLI if I am chaining to the existing interrupt handler.
Michael, your timing statements have me confused and now doubting my speed calculations.
The keyboard handler usually does not get used more than 2 or 3 times a day,
the timer tick every 55 msec, and
the adc every 0.5 msec.
I have chained to the timer tick to increment a variable.
Then I count the number of program loops that elapse between timer ticks.
Divide 54945.0549 / Loops = usec for each program loop (including printing the result to the screen).
Typically this is 47 or 48 usec per loop?
My adc is actually connected to a Xilinx CPLD, so for me to get the results
takes two ISA bus accesses, at 14 clocks each; for say an 8 MHz bus that is 2 x 14 x 125 ns = 0.0000035 sec = 3.5 usec.
So 2000 of these should take at least 7 msec??
Is that 14 clocks of the CPU at 133 MHz, or 14 clocks at ISA bus speed?
Regards
I see nothing wrong with your timing method, but your numbers seem to me to be a little off. In my references the clock frequency for the system timer is given as 1,193,182 or 1,193,180 or 1,193,200. Assuming an initial count of 0 for system timer 0, the timer would count 65,536 cycles per tick, and 65536 / 1,193,182 = 0.054925401 seconds = 54925.40 microseconds per tick.
I made a stupid error in my timing code that caused it to ignore the MSB of the count, so the results were meaningless. Using the corrected code, running on a 500 MHz P3 under Windows 2000, the timer overflows for all keystrokes other than an Enter immediately following a Pause/Break, for which the time is ~100 microseconds. Running on a 166 MHz PMMX under Windows 98 MS-DOS mode I get 110-125 microseconds for most of the keys, and ~8000 (!?) for the toggle keys. So if these numbers are correct, I would guess that a 133MHz 486 would be able to handle most keystrokes in something like 200 microseconds. I have corrected the original attachment.
For the normal 8 MHz ISA bus each clock cycle is 125 ns in duration. A 0-wait-state bus cycle spans two clock cycles, so a single 0-wait-state transfer would require 250 ns. A standard 8-bit bus cycle includes 4 wait states, so each transfer would require 750 ns, but this can be shortened to a 1-wait-state cycle where each transfer would require 375 ns. A standard 16-bit bus cycle includes 1 wait state, so each transfer would require 375 ns. According to one of my references, a 0-wait-state 16-bit bus cycle is possible for memory devices only.
You refer to BASIC in your code; what BASIC are you using?
Michael
I use PDS 7.1.
I am glad you have corrected me on the value of each tick.
Stupidly, I simply divided 1000 msec / 18.2.
Below is the (corrected) snippet of code. Times.NowTime is updated by the tick.

Times.PassCount = Times.PassCount + 1                        'increment the program loops
Times.Msec = CLNG((Times.NowTime * 54.9254) + (Times.Increment \ 1000))
'-----------------------------------------------
IF Times.NowTime > Times.OldTime THEN                        'if a tick has happened, recalculate
    Times.Increment = (54.9254 / Times.PassCount) * 1000     '55 msec / Nr of passes = msec per pass (make it usec)
    Times.OldTime = Times.NowTime
    Times.PassCount = 0
END IF
Looking at your ISA I/O timing leads me to conclude:
Using two IN AL,DX instructions and your time value (must have a look at the BIOS and see the wait states):
375 ns x 2 = 750 ns, x 2000 = 1500 usec just for reading 16 bits on the bus in 1 sec.
Looking at it another way,
the IRQ occurs precisely every 500 usec.
If 1 loop takes 47 to 48 usec, then there is one 750 ns ISA access every 10 loops.
So every 10th loop takes 750 ns longer. I am never going to see that.
PS.
My I/O boards have an 8254 timer on board, and I have a lot of experience driving them for liquid pulse counting,
clock dividers, interrupt on terminal count, etc.
I will try your timing routine again.
Edit: Tried it, and had to press Ctrl/Break twice to get it to respond to Pause?
Anyhow, 34 usec on a 133 MHz industrial CPU.
However, nobody is replying on the rules of when and when not to use CLI & STI.
Although I have solved my problem, I solved it by trial and error, not by knowing the rules.
Regards
Dinosaur
I can't recall ever seeing any specific rules regarding when and when not to use CLI and STI in an interrupt handler. AFAIK the rules are basically the same as for any real-mode program -- hardware interrupts must be disabled while performing tasks that cannot be interrupted (switching stacks, for example), but they should otherwise be enabled. The workable time limit for keeping interrupts disabled would depend on what else was running on the system and the maximum interrupt rate for any hardware interrupt in use.
One thing that is unique to a hardware interrupt handler is the need to issue an EOI to the interrupt controller. Until this is done the controller will inhibit requests from all but higher-priority devices.
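For an interrupt that arrives on the slave controller, like your IRQ10, the non-specific EOI must go to both controllers; a minimal sketch:

        mov  al,20h    ;non-specific EOI command
        out  0A0h,al   ;slave PIC (IRQ8-15)
        out  20h,al    ;master PIC (the slave cascades through IRQ2)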
Dinosaur,
the rule I use for setting/clearing the interrupt flag is:
Only ever use CLI/STI outside of an interrupt routine, and then only at the critical times when you really must not be interrupted, such as when updating an interrupt vector.
An interrupt vector takes two writes to update, so you must disable interrupts while doing it, in case an interrupt occurs while the new vector is only half updated.
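For example, writing a vector directly into the table (IRQ10 corresponds to INT 72h on AT-class machines; NewHandler is just a placeholder name):

        cli                                         ;no interrupts while the vector is half updated
        xor  ax,ax
        mov  es,ax                                  ;the vector table lives at 0000:0000
        mov  word ptr es:[72h*4],offset NewHandler  ;offset first
        mov  word ptr es:[72h*4+2],cs               ;then segment
        sti

(Setting the vector through DOS function 25h, as in your code, is the safer route; DOS takes care of this for you.)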
Setting and clearing of the interrupt flag takes care of itself during the interrupt routine so you shouldn't need to mess with it then.
<<I suspected that someone might pick up on the number of interrupts per second>>
2000 interrupts per second is no big deal. From memory, the interrupt response time for that type of CPU is around 5 us, so if the interrupt handler is short you could cope with 10,000+ interrupts per second without difficulty.
<<So, are you saying that when an interrupt is detected by DOS>>
Interrupts aren't detected by DOS. They're a hardware thing.
When the CPU detects an interrupt it pushes the flags and instruction pointer, clears the interrupt enable flag, and calls the interrupt routine through the required vector.
So, when entering the interrupt service routine the interrupt enable flag is always zero, preventing further interrupts until the IRET instruction, which restores the original flags, including the set IF, which re-enables interrupts.
It has to be done this way because the interrupt is a level-sensitive input. If the interrupts were not disabled immediately by the CPU then the interrupt would immediately occur again and the CPU would stall.
The flag can be set by the programmer early (before IRET), but this isn't usually needed, as the interrupt routine should not take long to execute.
If you do have a very involved interrupt service routine then you can re-enable interrupts, but you must first clear the cause of the interrupt, otherwise the same interrupt will immediately re-occur and you'll crash the CPU.
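As a sketch of that pattern (ADC_PORT is a made-up name; I'm assuming that reading the device is what drops its request line):

LongISR:
        push ax
        push dx
        mov  dx,ADC_PORT   ;hypothetical port; reading it is assumed to clear the request
        in   al,dx
        mov  al,20h        ;EOI so the PIC will pass on new requests
        out  20h,al
        sti                ;only now let other interrupts in
        ;... lengthy processing ...
        pop  dx
        pop  ax
        iret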
<<Sounds like the conclusion is that I don't need to CLI if I am chaining to the existing interrupt handler>>
That's right.
Michael,
<<In my references the clock frequency for the system timer is >>
The timer runs at 1,193,181.666 Hz.
This comes from the color subcarrier frequency of NTSC TV, which was 3.579545 MHz. The crystal used to produce this ran at 4x that frequency = 14.318180 MHz.
When the PC was designed, that crystal was far cheaper than any other because it was made by the million for TVs, so it was chosen as the reference frequency for all PCs.
The PC divides it by 12 instead of 4 to give 14,318,180 / 12 = 1,193,181.66 Hz.
Dioxin, many thanks for your reply.
Quote
It has to be done this way because the interrupt is a level-sensitive input. If the interrupts were not disabled immediately by the CPU then the interrupt would immediately occur again and the CPU would stall.
My experience does not support this. I was under the impression that the IRQ line is triggered on a rising edge.
The signal I use to trigger the IRQ is low for about 140 usec, and when it goes high, stays high for 360 usec.
The handler simply reads two 8-bit words from an address and then chains to the old handler.
Running it in CodeView and watching the IRQ on a CRO confirms that my handler only gets attention after each low-to-high transition?
With reference to the timer clock, I will have to test to see what the increment is.
Regards
Dinosaur
Dinosaur,
the source of confusion is that the INTR line on the CPU is not the line you are driving. You probably have a PIC (Programmable Interrupt Controller, usually an 8259A) which drives the CPU INTR line, and you'll be driving one of the IRx lines of the PIC.
The PIC can be programmed to be either level or edge triggered.
The CPU INTR line is always level triggered.
The PC designers made an error when they first built the PC by programming the PIC to be edge triggered. This made it difficult to share interrupt lines between devices.
Paul.
Hi Paul,
Now that you mention it I recall IBM's choice of a common TV crystal, and that the frequency was divided by 3 to derive the 4.77 MHz processor and bus clocks (for the original PC), and divided again by 4 to derive the timer input frequency. But it seems to me that the frequency tolerance in combination with temperature effects would limit the significant digits to around 5, so even the 1193180 value was probably over-specified.
Quote
If you do have a very involved interrupt service routine then you can re-enable interrupts, but you must first clear the cause of the interrupt, otherwise the same interrupt will immediately re-occur and you'll crash the CPU.
This would be so for an MCA system, where the IRQ lines are level triggered, or for an EISA system if the particular IRQ were operating in level triggered mode, but not for a "standard" PC (or at least not without reprogramming the interrupt controller). Per the Intel documentation for the 8259A: "The IR input can remain high without generating another interrupt." The attachment contains a small program that demonstrates this.
[attachment deleted by admin]
Michael,
the tolerance of the cheapest TV crystal is typically 50 ppm untrimmed (which is cheaper, so that's what the PC will do!), so the frequency won't be exact but could vary by +/- 60 Hz from the nominal 1.193181 MHz. It's about as accurate as a cheap wristwatch (in the order of 5 seconds a day).
<<"The IR input can remain high without generating another interrupt." >>
The IR input (to the PIC) can remain high, but the INTR pin on the CPU must be cleared if you want to re-enable interrupts while still in the interrupt service routine. In the case of a standard PC with the 8259A, you would need to ensure that the End of Interrupt command is issued to the PIC before re-enabling interrupts, so the PIC will de-activate the INTR line to the CPU.
It's still a bad idea to do this; there really shouldn't be any need to mess with the CPU interrupt enable flag DURING an interrupt service routine. It's just asking for trouble.
I realise now that I might have caused more confusion than I should. My original comments were meant to address the title of the thread, "When to use CLI and when not", and I did so from the point of view of the CPU, which has the one INTR line to handle the interrupts. You should only use CLI/STI outside of the interrupt service routine, and only where you absolutely must stop interrupts occurring, such as during updating of interrupt vectors or interfacing with certain hardware.
Of course, from the user's point of view, the INTR line of the CPU isn't the one you have access to; it's the IRx lines of the PIC, which can be programmed to behave differently.
Paul.
Hi all
Once again I have learned from the group.
I misread your message Paul, and of course I was talking about the interrupt controller and you were not.
I now have a much better idea of where and when to use CLI/STI,
and after looking at my code, there is no need at all. All the bit-bashing has been replaced.
Except of course during the re-assignment of the handler.
I was going to check Paul's timing, but realised that the only reference I have is another crystal
on the adc board, which would suffer from the same inaccuracies as mentioned.
Bit like using your bathroom scales to complain about an underweight packet of chips.
Anyhow, the msec timer is only used to time critical functions that happen hundreds of times per minute,
so an error of 5 sec in 24 hrs will make little difference on a function that may take 80 or 81 msec.
Thanks for all the help. :U
Regards
Dinosaur
Hi all
Sounds like I jumped the gun.
Still having trouble.
The routine below is called by Basic to clear memory in "Unreal" mode.
Works great, as long as I leave the CLI/STI in place; otherwise it hangs.
Suspect that setting DS to zero is the reason.
An interrupt not pushing/popping DS?? So I checked out all my interrupt routines.
Changed the pushing of individual registers used to PUSHA and POPA, to no avail.
Any suggestions?
;--------------------------------------
;WipeMem puts 0 in Unreal Flat Memory
; 29-12-2004
;--------------------------------------
.MODEL MEDIUM
.486
.CODE
        PUBLIC ClrMem
;------------------
ClrMem  PROC FAR
        PUSH BP
        MOV  BP,SP
        PUSHA                       ;EDIT: this causes the hang as well; DS must be pushed separately
;------------------
        cli
        xor  eax,eax
        mov  ds,ax                  ;DS = AX = 0 .. use LINEAR ADDRESSING!
        mov  eax,400000h            ;start at the 4MB boundary
        mov  ecx,200000h            ;200000h words = 400000h bytes = 4MB to clear
Clear:
        mov  word ptr DS:[EAX],00h  ;clear memory!!!
        add  eax,2
        loopd Clear
        sti
;------------------
        POPA
        POP  BP
        RET  2
ClrMem  ENDP
;----------------------
END
Indexed addressing was never my strength, so perhaps a suggestion on how to
improve my indexing would also be appreciated.
Regards
Dinosaur
Quote
The IR input (to the PIC) can remain high, but the INTR pin on the CPU must be cleared if you want to re-enable interrupts while still in the interrupt service routine. In the case of a standard PC with the 8259A, you would need to ensure that the End of Interrupt command is issued to the PIC before re-enabling interrupts, so the PIC will de-activate the INTR line to the CPU.
On a standard PC the interrupt controller will set the INT line inactive after the second interrupt acknowledge cycle, before the interrupt is actually processed. You can verify this on page 17-18 of the 8259A data sheet available here:
http://www.electro-tech-online.com/datasheets/8259a_intel.pdf
Quote
It's still a bad idea to do this; there really shouldn't be any need to mess with the CPU interrupt enable flag DURING an interrupt service routine. It's just asking for trouble.
I regard it as normal practice, when necessary, and I know I'm not alone in this. The IRQ0 and IRQ1 handlers for the recent AWARD BIOS that I checked just now both execute an STI as the second instruction. While I do agree that most of the time there is no need to enable interrupts, situations can arise where you need to do significant processing in the handler before chaining to the previous handler, and where you cannot leave the higher-priority interrupts disabled.
Dinosaur,
PUSHA and POPA do not affect the segment registers.
Procedures called by the Microsoft DOS basics must generally preserve the direction flag and the BP, DI, SI, DS, and SS registers. You could use ES without preserving it.
The procedure does not appear to take an argument, but your return instruction is adjusting the stack pointer on return.
I don't know what else is running in your system, but if any PM code writes to a segment register the segment limit could be altered. Beyond that, I have no idea how leaving interrupts enabled could cause a problem.
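For what it's worth, here is a sketch of the routine with DS saved explicitly (PUSHA does not cover segment registers) and with dword stores to cut the loop count in half; it assumes your unreal-mode setup leaves the segment limit at 4GB when DS is reloaded with zero:

ClrMem  PROC FAR
        PUSH BP
        MOV  BP,SP
        PUSHA
        PUSH DS                      ;PUSHA does not save segment registers
        XOR  AX,AX
        MOV  DS,AX                   ;DS = 0, linear addressing (unreal mode assumed)
        MOV  EBX,400000h             ;start at the 4MB boundary
        MOV  ECX,100000h             ;100000h dwords = 400000h bytes = 4MB
        XOR  EAX,EAX
Clear:
        MOV  DWORD PTR DS:[EBX],EAX  ;store four zero bytes at a time
        ADD  EBX,4
        LOOPD Clear
        POP  DS
        POPA
        POP  BP
        RET  2                       ;matches your one-argument DECLARE
ClrMem  ENDP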
Michael
The routine was actually written as a function, so that it can return a value.
Functions are also easier to implement in complex Basic code.
The penalty is that you have to clean up the stack even if you don't send a value back.
I will try saving all the flags and segment registers and see if that allows me to remove the CLI/STI.
Regards
Dinosaur
A function has the same calling conventions as a sub procedure. If no arguments are pushed onto the stack before the call, then none need be removed on return. The unnecessary SP adjustment would have no effect if the function were called only from module-level code, but if it were called from a procedure the program would crash when that procedure returned to the wrong address.
Michael,
<<On a standard PC the interrupt controller will set the INT line inactive after the second interrupt acknowledge cycle, before the interrupt is actually processed.>>
Not so. At least not necessarily so.
This only happens if the PIC is set to do an automatic end of interrupt. I don't know about yours but I have a PC which is definitely NOT set to do that and requires a manual EOI command to be sent to the PIC in order to clear it.
<<just now both execute an STI as the second instruction>>
What's the first instruction? If it isn't a call to the main processing routine then there are serious problems.
Imagine this scenario.
INT0 and INT7 occur simultaneously.
You expect the high priority INT0 to be processed and then the low priority INT7.
If your 2 statements are correct then the following will happen instead..
INT0 and INT7 lines go high.
The 2 corresponding bits in the PIC IRR get set.
The PIC correctly prioritises these and knows that the vector to pass to the CPU is that of INT0 as it is the higher priority.
The INT0 vector goes to the CPU which acknowledges this and immediately causes the automatic EOI to clear the PIC ISR bit for INT0.
The interrupt is just about to get CPU time when..
The PIC now sees that INT0 is no longer in service as it was automatically cleared immediately but there is a pending INT7.
The PIC sees INT7 as the next priority interrupt and issues an interrupt request to the CPU.
At this point the CPU would normally ignore the request, because the interrupt enable flag of the CPU is clear... but you say your interrupt routine immediately re-enables interrupts by executing STI. So, in your case, the high-priority INT0 routine gets to its second instruction (STI) and the CPU is immediately interrupted by the low-priority INT7 which is pending.
INT7 now completes (assuming no further interrupts occur) and, on return, INT0 completes.
If what you say is true, then the lower-priority interrupts end up being given priority over the high-priority ones!
I'd guess that the automatic EOI is not being used, and that the interrupt service routine is clearing the interrupt manually before it exits.
It still has the problem that, with the STI at the start of the interrupt service routine, the routine must be carefully written to be re-entrant and able to cope with being called multiple times before it completes the first call, i.e. it needs to cope with nested interrupt calls, otherwise interrupt calls will be missed. I know it's not impossible, but it's an unnecessary complication.
It should be very rare for an interrupt routine to take long enough to be a problem. If it is, then certain critical stuff needs to be done before the STI, so that the routine can correctly handle, in effect, multiple interrupts of the same priority being processed at different points in the same interrupt service routine.
<<The IRQ0 and IRQ1 handlers for the recent AWARD BIOS that I checked>>
Strange; I could see it being useful for lower-priority interrupts to allow the higher-priority ones to jump in in certain circumstances, but INT0 should never need it: what else other than another INT0 can interrupt it?
Paul.
Hi all
DECLARE FUNCTION ClrMem& (A&)

IF ClrMem&(0) <> 0 THEN    'one way to use it, if a value is passed
A& = ClrMem&(0)            'the other way; A& is ignored if no value is passed

When running it through CodeView, it showed a stack imbalance of 2 on return, so RET 2.
Quote
but if it were called from a procedure the program would crash when that procedure returned to the wrong address.
How does CLI/STI make a difference in this case?
Regards
Paul,
Just to make sure we are both on the same page here, by standard PC I mean one that follows the industry standard design that was originally based on the IBM PC-AT. While not all PCs have followed this design (EISA and MCA systems did not), the vast majority have. Within this design, the hardware interrupt subsystem is programmed (by the BIOS) just as it was for the PC-AT, and the hardware interrupts work just as they did for the PC-AT. This means no automatic EOI, and edge-sensitive triggering for the interrupt requests.
The scenario that you describe would be correct for automatic EOI mode. In AP-59, Using the 8259A Programmable Interrupt Controller, Intel describes the problem with automatic EOI disturbing the fully nested mode, as well as the problem with "over nesting", where an IR input keeps interrupting its own routine.
In the data sheet that I linked, Figure 10 on page 18 clearly shows that INT goes inactive at the end of the second interrupt acknowledge cycle. AP-59 includes this same figure, as well as:
Quote
When the IR input is in an inactive state (LOW), the edge sense latch is set. If edge sensitive triggering is selected, the "Q" output of the edge sense latch will arm the input gate to the request latch. This input gate will be disarmed after the IR input goes active (HIGH) and the interrupt request has been acknowledged. This disables the input from generating any further interrupts until it has returned low to re-arm the edge sense latch. If level sensitive triggering is selected, the "Q" output of the edge sense latch is rendered useless. This means the level of the IR input is in complete control of interrupt generation; the input won't be disarmed once acknowledged.
...
Immediately after the interrupt acknowledge sequence, the PR sets the corresponding bit in the ISR which simultaneously clears the edge sense latch. If edge-sensitive triggering is used, clearing the edge sense latch also disarms the request latch. This inhibits the possibility of a still active IR input from propagating through the priority cell. The IR input must return to an inactive state, setting the edge sense latch, before another interrupt request can be recognized. If level sensitive triggering is used, however, clearing the edge sense latch has no effect on the request latch. The state of the request latch is entirely dependent on the IR input level. Another interrupt will be generated immediately if the IR level is left active after its ISR bit has been reset. An ISR bit gets reset with an End-of-Interrupt (EOI) command issued in the service routine.
And the program in the attachment that I posted, the code for which is repeated below, run on the two systems that I currently have available, clearly indicates that each interrupt event is generating only one call to the handler.
.model small, c
.386
.stack
.data
.code
.startup

; Put data handlers need to access in code segment.
        jmp @F
prevIRQ0Handler dd 0
prevIRQ1Handler dd 0
counter         dw 0
@@:
; Hook interrupt 8.
        mov ax,3508h
        int 21h
        mov word ptr prevIRQ0Handler,bx
        mov word ptr prevIRQ0Handler[2],es
        push ds
        mov ax,2508h
        push cs
        pop ds
        mov dx,offset IRQ0Handler
        int 21h
        pop ds
; Hook interrupt 9.
        mov ax,3509h
        int 21h
        mov word ptr prevIRQ1Handler,bx
        mov word ptr prevIRQ1Handler[2],es
        push ds
        mov ax,2509h
        push cs
        pop ds
        mov dx,offset IRQ1Handler
        int 21h
        pop ds
; Wait for user to press Escape.
@@:
        mov ah,0
        int 16h
        cmp al,27
        jne @B
; Unhook and exit.
        push ds
        lds dx,prevIRQ0Handler
        mov ax,2508h
        int 21h
        pop ds
        push ds
        lds dx,prevIRQ1Handler
        mov ax,2509h
        int 21h
        pop ds
.exit

; =========================================================
; This will display once per 18 ticks.
; =========================================================
IRQ0Handler:
        ; Enable interrupts immediately.
        sti
        inc cs:counter
        .IF (cs:counter == 18)
            mov cs:counter,0
            push ax
            push bx
            ; Display a '0'.
            mov bx,0
            mov ah,0eh
            mov al,'0'
            int 10h
            pop bx
            pop ax
        .ENDIF
        ; Chain to the previous handler.
        jmp cs:prevIRQ0Handler

; =========================================================
; This will be called at least once each time a key is
; pressed or released, and more than once for most of
; the extended keys.
; =========================================================
IRQ1Handler:
        ; Enable interrupts immediately.
        sti
        push ax
        push bx
        ; Display a '1'.
        mov bx,0
        mov ah,0eh
        mov al,'1'
        int 10h
        pop bx
        pop ax
        ; Chain to the previous handler.
        jmp cs:prevIRQ1Handler

end
The AWARD BIOS that I checked is v4.51PG, 09/19/2000, on a GigaByte GA-5AX. The first instruction at the entry points is a near jump to the actual handler, and the first instruction in the handler is an STI.
Dinosaur,
I don't understand. If you are passing a parameter to the function, it is being ignored, and the function as coded will always return zero.
Michael
You are right, it always returns zero. However, it still performs its task of clearing the memory.
Some people call it laziness; I call it forward planning.
If I declare all functions to accept a command and return a value, then whenever I need to expand the function,
a simple change of code is all that is needed in the asm function. The Basic code does not need to be changed, other
than the value it checks for. You could argue that it will be slower having to push and pop values (that don't exist).
To satisfy my curiosity, I ran it as a routine to confirm the CLI/STI problem, but it still exists.
I will try to push flags and other registers. Have just been a bit busy.
Regards
Dinosaur