I'm wondering why Bally used 120 ohm pull-up resistors on the bases of their solenoid driver transistors (TIP120 equivalents), and more generally why they did the pre-driving the way they did. It seems like a waste of electricity when the solenoids aren't firing.
My understanding of the circuit is that all 19 driver transistors are theoretically ready to fire via the 120 ohm pull-up on each circuit. The only thing that prevents them from firing is the 3081 chip, which shunts the current from the 120 ohm pull-up to GND. That means that most of the time, when the solenoids aren't actually firing, each transistor circuit draws about 41.7 mA. 19 transistors × 41.7 mA ≈ 792 mA of load on the Solenoid Driver Board, just to keep all the transistors from firing.
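For anyone who wants to check my numbers, here's the back-of-envelope math as a quick sketch (I'm assuming the pull-ups run off the 5 V logic supply, since 5 V / 120 ohm is exactly where the 41.7 mA figure comes from):

```python
# Standby-current estimate for the Bally Solenoid Driver Board pull-ups.
# ASSUMPTION: pull-ups are tied to the 5 V logic supply (inferred from
# 5 V / 120 ohm = 41.7 mA; check your schematic).
V_SUPPLY = 5.0      # assumed pull-up supply, volts
R_PULLUP = 120.0    # pull-up resistor per driver base, ohms
N_DRIVERS = 19      # driver transistor circuits on the board

i_per_channel_ma = V_SUPPLY / R_PULLUP * 1000        # current per channel, mA
total_ma = N_DRIVERS * i_per_channel_ma              # total standby current, mA
total_w = N_DRIVERS * V_SUPPLY**2 / R_PULLUP         # power burned as heat, W

print(f"{i_per_channel_ma:.1f} mA per channel")      # 41.7 mA
print(f"{total_ma:.0f} mA total standby current")    # 792 mA
print(f"{total_w:.1f} W dissipated holding drivers off")
```

So it's not just ~800 mA of current; it's about 4 W of continuous heat split between the resistors and the 3081, which is the real cost of this design.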
Without the 3081 chip in there (say you removed it), or if it has failed, the solenoids it was holding off (by shunting each NPN base to GND) will fire as soon as the machine is turned on. There was a recent thread where exactly that happened, and some digging through RGP history also turned up discussion of how this all works.
So why did Bally choose a 120 ohm pull-up instead of a higher value? Switching speed? And on that note, why not use some kind of inverter or pre-driver circuit here instead?
I'm wondering if there's a modification that can be done to these boards so they work as original but with greatly reduced current draw. 800 mA just to hold off coils that are inactive most of the time seems a bit ridiculous to me at this point. I know the Bally engineers were smart guys; I'm just not understanding why they did it this way, and why there can't be a better solution.
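Partly answering my own question on the resistor value: a rough base-drive calculation suggests 120 ohm may not have much headroom. This is only a sketch with assumed numbers (not from the schematic): a TIP120-class Darlington with a minimum gain around 1000, roughly 3 A of peak coil current, about 2.5 V of base-emitter drop in hard saturation, and a 5x overdrive factor for fast, solid turn-on.

```python
# Rough upper bound on the base pull-up value for a Darlington coil driver.
# ALL of these numbers are assumptions for illustration, not schematic values.
V_SUPPLY = 5.0     # assumed pull-up supply, volts
V_BE_SAT = 2.5     # assumed Darlington base-emitter drop in saturation, volts
HFE_MIN = 1000     # assumed worst-case TIP120-class current gain
I_COIL = 3.0       # assumed peak solenoid current, amps
OVERDRIVE = 5.0    # overdrive factor for fast, hard saturation

i_base = I_COIL / HFE_MIN * OVERDRIVE          # base current needed, amps
r_max = (V_SUPPLY - V_BE_SAT) / i_base         # largest usable pull-up, ohms
print(f"base drive needed: {i_base * 1000:.0f} mA")   # 15 mA
print(f"max pull-up: {r_max:.0f} ohms")               # ~167 ohms
```

If those assumptions are anywhere near right, 120 ohm is roughly what you'd pick to guarantee hard saturation on a worst-case transistor, which would explain the choice. It also suggests that just swapping in a much bigger resistor risks slow, lossy switching on high-current coils; really cutting the standby current would mean an actual pre-driver stage, which is a bigger board change.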