Nope, because the drive cannot erase a single physical/logical sector. NAND is only erasable in (much) larger allocation blocks, typically 4MB in size. Expect that to get much larger as capacities go up, by the way: the erase block size is a function of how the die is constructed, and as density goes up it becomes easier to make the erase block larger too.
This means that any time you write data in less than the drive's internal erase block size you are inherently reading and rewriting unrelated information that is not in the file you have open for writing. Said data is likely "at rest" and may have been at rest for a very long time. The drive has no way to know what it is; it might be directory information, a file's blocks, the filesystem free space bitmap or something else.
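To put that write amplification in concrete terms, here's a toy model of what the controller has to do when the host overwrites one 4KB sector inside a 4MB erase block. This is my own illustration, not any vendor's firmware; the names and sizes are invented and everything is simulated in ordinary arrays:

```c
/* Toy model of a sub-erase-block overwrite inside an SSD controller.
 * Purely illustrative -- not any vendor's firmware; names and sizes
 * here are assumptions for the sake of the example. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ERASE_BLOCK (4u * 1024 * 1024)  /* 4MB erase block */
#define SECTOR      4096u               /* one host sector */

static uint8_t flash[ERASE_BLOCK];      /* one simulated erase block */
static uint8_t staging[ERASE_BLOCK];    /* controller DRAM           */

/* Overwrite one 4KB sector: the other 1023 sectors are unrelated,
 * at-rest data that gets dragged through volatile RAM and rewritten. */
static void overwrite_sector(uint32_t sector_in_block, const uint8_t *new_data)
{
    memcpy(staging, flash, ERASE_BLOCK);                 /* read 4MB       */
    memcpy(staging + (size_t)sector_in_block * SECTOR,
           new_data, SECTOR);                            /* merge new 4KB  */
    memset(flash, 0xFF, ERASE_BLOCK);                    /* ERASE: the old
                                                            data now exists
                                                            only in RAM    */
    /* A power cut here loses the entire 4MB, not just the sector the
     * host was writing -- that is the window described above.         */
    memcpy(flash, staging, ERASE_BLOCK);                 /* program 4MB    */
}

int main(void)
{
    uint8_t one_sector[SECTOR];
    memset(flash, 0xAA, ERASE_BLOCK);        /* pretend old data at rest */
    memset(one_sector, 0x55, SECTOR);        /* the 4KB the host writes  */
    overwrite_sector(7, one_sector);
    printf("rewrote %u bytes to change %u\n", ERASE_BLOCK, SECTOR);
    return 0;
}
```

One 4KB change from the host turns into a 4MB read, a 4MB erase and a 4MB program, and every byte of unrelated at-rest data in that block rides along through volatile RAM.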
In addition the drive has a mapping table between physical (on-flash) allocation blocks and offsets and the logical sectors the OS sees. That has to be updated too. If the power goes off with the mapping table and the data on the drive incongruent, you're screwed. Remember that the *mapping table* is subject to the same problem; it too is on NAND flash and can only be rewritten in 4MB chunks! This set of interdependencies means it is entirely possible for the mapping table to be damaged during an update (e.g. the power goes off while it's being written!), resulting in the destruction of huge amounts of data -- quite possibly (and frequently) including everything on the device.
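The ordering problem looks roughly like this in sketch form -- again a made-up illustration of the idea, not real firmware:

```c
/* Toy sketch of the ordering problem in the logical-to-physical map (the
 * flash translation layer).  Hypothetical structures and function names;
 * the point is only that the data write and the map update are separate
 * flash operations with a vulnerable window between and during them.   */
#include <stdint.h>
#include <stdio.h>

#define LOGICAL_SECTORS 1024u

/* Logical sector -> physical flash location.  On a real drive this table
 * itself lives in NAND and can only be rewritten an erase block at a time. */
static uint32_t l2p_map[LOGICAL_SECTORS];

/* Stand-ins for NAND program operations. */
static void flash_program_data(uint32_t phys)       { (void)phys; }
static void flash_program_map (const uint32_t *map) { (void)map;  }

static void write_logical_sector(uint32_t lsec, uint32_t new_phys)
{
    flash_program_data(new_phys);    /* step 1: the data lands on flash */

    /* Power loss between step 1 and step 2 leaves the map pointing at the
     * old copy of the sector -- annoying but survivable.  Power loss while
     * step 2 is in flight can leave the map itself half-written, and then
     * every sector that map region described is unreachable.            */

    l2p_map[lsec] = new_phys;
    flash_program_map(l2p_map);      /* step 2: the map lands on flash  */
}

int main(void)
{
    write_logical_sector(42, 7);
    printf("logical sector 42 now maps to physical %u\n", l2p_map[42]);
    return 0;
}
```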
Finally, the drives lie. They tell the OS that an operation is complete when the data has been changed in on-drive RAM, *not* when it has been committed and all the metadata updates in the drive are complete. fsync() is supposed to not return until the drive has committed everything. Spinning-rust drives sometimes honor this and sometimes don't; SSDs almost universally do *not* flush all of their buffer memory, including the internal mapping table, to a consistent state before returning "complete" in this circumstance.
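For reference, this is the standard POSIX pattern an application uses when it wants data on stable media before it proceeds (write, then fsync, and treat an fsync error as data loss). The helper name is mine; the point of the paragraph above is that the fsync guarantee is only as good as the drive's honesty about its cache:

```c
/* Standard "make it durable" pattern: write, then fsync, check both.
 * None of this helps if the drive acknowledges the flush while the data
 * (or its mapping metadata) is still sitting in volatile DRAM.         */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int write_durably(const char *path, const void *buf, size_t len)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    const char *p = buf;
    while (len > 0) {
        ssize_t n = write(fd, p, len);
        if (n < 0) {
            if (errno == EINTR)
                continue;
            close(fd);
            return -1;
        }
        p   += n;
        len -= (size_t)n;
    }

    if (fsync(fd) != 0) {        /* ask the drive to commit everything...  */
        close(fd);               /* ...a lying drive returns 0 regardless  */
        return -1;
    }
    return close(fd);
}

int main(void)
{
    const char msg[] = "supposedly durable\n";
    if (write_durably("test.dat", msg, sizeof msg - 1) != 0)
        perror("write_durably");
    return 0;
}
```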
There's a little program running around called "diskchecker.pl" (it's a perl script) that will test all this with a pair of machines. The recording ("writer") process runs on a machine that you leave powered; the I/O process that actually hammers the disk runs on the machine you cord-pull. You start both, and while the I/O machine is writing you yank its power cord, then reboot it and restart the program on that box.
When it comes back up the recording process gets the restart notification. It knows what the I/O machine *says* the drive had committed (because the I/O process had received a "complete" from the drive and passed it back) and therefore what was supposedly on stable storage; the test then goes back, starting at the beginning of what was written, and verifies that every byte that was written and confirmed is actually there.
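I won't reproduce the perl here, but the core of the technique is simple enough to sketch. This is my own stripped-down, single-machine illustration, not diskchecker.pl itself (the real script also reports each acknowledged write over the network to the box that stays powered): write numbered, checksummed records, fsync each one, note what the drive acknowledged, and after the power-pull check that every acknowledged record is really on the media.

```c
/* Simplified illustration of the diskchecker idea -- not the real script.
 * "create" writes checksummed, numbered 4KB records and fsyncs each one;
 * the last number printed is the last write the drive acknowledged.
 * After the power-pull and reboot, "verify N" checks that records 0..N
 * are actually intact on the media.                                     */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define REC_SIZE 4096u

struct record {
    uint64_t seq;                      /* record number                */
    uint64_t sum;                      /* trivial checksum of the fill */
    uint8_t  fill[REC_SIZE - 16];      /* payload pattern              */
};

static void make_record(struct record *r, uint64_t seq)
{
    memset(r, 0, sizeof *r);
    r->seq = seq;
    for (size_t i = 0; i < sizeof r->fill; i++) {
        r->fill[i] = (uint8_t)(seq + i);
        r->sum += r->fill[i];
    }
}

int main(int argc, char **argv)
{
    if (argc < 3) {
        fprintf(stderr, "usage: %s create|verify <file> [last_acked]\n", argv[0]);
        return 1;
    }

    if (strcmp(argv[1], "create") == 0) {
        int fd = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }
        struct record r;
        /* Runs until the disk fills or you pull the cord. */
        for (uint64_t seq = 0; ; seq++) {
            make_record(&r, seq);
            if (write(fd, &r, sizeof r) != (ssize_t)sizeof r) break;
            if (fsync(fd) != 0) break;
            /* The drive just claimed this record is durable; the real tool
             * sends this acknowledgement to the machine that stays up.   */
            printf("acked %llu\n", (unsigned long long)seq);
        }
    } else if (strcmp(argv[1], "verify") == 0 && argc == 4) {
        uint64_t last = strtoull(argv[3], NULL, 10);
        int fd = open(argv[2], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }
        struct record r, want;
        for (uint64_t seq = 0; seq <= last; seq++) {
            if (read(fd, &r, sizeof r) != (ssize_t)sizeof r ||
                (make_record(&want, seq), memcmp(&r, &want, sizeof r)) != 0) {
                printf("CORRUPT at acknowledged record %llu\n",
                       (unsigned long long)seq);
                return 1;
            }
        }
        printf("all %llu acknowledged records intact\n",
               (unsigned long long)(last + 1));
    }
    return 0;
}
```

Run "create" on the victim machine, note the last acknowledged record number before you yank the cord, then after reboot run "verify" with that number; any mismatch is data the drive claimed was committed and then lost.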
Nearly *all* SSDs fail this test, and most fail it dramatically with data corruption *far* from where the I/O was when the cord was yanked. If you can run this thing a half-dozen times and see no corruption you're odds-on to be ok. If you can run it a hundred times you can be very confident. Most SSDs will fail on the very first attempt and a good part of the time they won't even come back up with a coherent filesystem on them after the cord is replugged.
The Intel 730s are pretty much the only "consumer" drives that I've seen pass this test. Their S3500/3700 series pass as well, but those are a LOT more expensive and are marketed as data center devices.
If you use an SSD as an "operating system and programs" device only, have no personal and irreplaceable data on it (it's all on a server somewhere, etc) and thus don't care if it gets corrupted because you can simply reload it, then consumer-style SSDs are fine. Most people, however, keep a LOT of personal and irreplaceable data on their system and do not either segregate it onto a robustly backed-up file server or have some other scheme in place to keep a corrupted boot volume from hosing them. If you're one of those "most" then using "consumer" style SSDs is literally playing with a device that may self-destruct your data without warning.
BTW if you think the machine being on a UPS makes it "safe" you're wrong. This is what one of my production systems, which is on a UPS and has *never* taken an unclean power loss (the UPS notifies it when the battery gets low and it does a controlled shutdown in that instance), reports when I ask the drive about its unsafe-shutdown history:
Model Family: Intel 730 and DC S35x0/3610/3700 Series SSDs
.....
ID# ATTRIBUTE_NAME FLAGS VALUE WORST THRESH FAIL RAW_VALUE
174 Unsafe_Shutdown_Count -O--CK 100 100 000 - 19
Now that doesn't mean I would have gotten screwed 19 times had the drive not had power protection, but it does mean that I *might* have, and this is on a system *with* 100% UPS coverage that has never failed during the time that drive has been installed. However, the machine has been shut down for maintenance and such, and during those shutdowns the drive had power removed 19 times before it had managed to commit everything to stable storage and verify that it was there. This is what the drive firmware itself tells me, not what the operating system believes.
This is why I buy SSDs with functional power protection, and if a manufacturer claims their drives have it I *verify* that claim before I trust them.