One at a time, but it won't take much memory to process if you only need to process them line by line. That's quite a bit of data, though.
Even these days I'd be hesitant to feed a 1GB file to an editor, mind. I'd expect it to work, but it might get very slow.
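
For what it's worth, here's a minimal Python sketch of that kind of line-by-line pass; it counts games by looking for the [Event "..."] tag that opens every PGN game, and it never holds more than one line in memory at a time (the filename is just a placeholder):

# Stream a huge PGN line by line; memory use stays flat regardless of file size.
def count_games(path):
    games = 0
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:  # the file object yields one line at a time
            if line.startswith('[Event '):  # each game begins with an [Event ...] tag
                games += 1
    return games

print(count_games("games.pgn"))  # placeholder filename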
The files can all be placed in one folder if need be.
The files would be named something like:
lichess_db_standard_rated_2013-01.pgn
lichess_db_standard_rated_2013-02.pgn
lichess_db_standard_rated_2013-03.pgn
etc.
They're all taken from here: https://database.lichess.org/
For the larger files, I've split them using pgnsplit into 1GB chunks, which would be named something like this (a rough Python sketch of the same kind of split follows the list):
lichess_db_standard_rated_2018-01.1.pgn
lichess_db_standard_rated_2018-01.2.pgn
etc.
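
In case anyone wants to do the split without pgnsplit, here's a rough Python sketch of the same idea, assuming the ~1GB target and the naming pattern above: stream the file line by line and start a new chunk once the current one passes the size limit, but only at a game boundary (an [Event ...] header) so no game gets cut in half.

# Split a big PGN into ~1GB chunks, cutting only at game boundaries.
# A sketch, not pgnsplit itself; the chunk size and naming scheme are assumptions.
CHUNK_BYTES = 1_000_000_000  # ~1GB per chunk

def split_pgn(path):
    base = path[:-4] if path.endswith(".pgn") else path
    part, written, out = 1, 0, None
    with open(path, "rb") as f:  # binary mode: exact byte counts, no decoding cost
        for line in f:
            # Open the next chunk at a game boundary once the limit is passed.
            if out is None or (written >= CHUNK_BYTES and line.startswith(b"[Event ")):
                if out:
                    out.close()
                out = open(f"{base}.{part}.pgn", "wb")
                part, written = part + 1, 0
            out.write(line)
            written += len(line)
    if out:
        out.close()

split_pgn("lichess_db_standard_rated_2018-01.pgn")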
Each PGN ranges in size from about 100 MB to 1.5 GB and contains about 100,000 to 1,500,000 games (roughly 1 KB per game on average). Given their massive size, I'm thinking that doing them in batches of a few at a time might be best.
Truth be told, though, I'm starting to think I should abandon this idea, at least for the time being. I'm already about halfway through my project of converting said databases to ChessBase format and compiling separate databases for each month, plus one massive database with everything in it. For now I think I'm just going to unannotate everything using 'Unannotate DB' in CB; it makes for a much cleaner-looking readout anyway, and the annotations file [.cba] is by far the biggest file in the database, so dropping it might shrink the database somewhat. Although it still might be useful to know how to do this for future reference.
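
(For the PGN side of "unannotating", as opposed to CB's own format: the python-chess library can do it while streaming, since its exporter can simply drop comments and variations on the way out. A rough sketch, assuming the library is installed and with placeholder filenames:)

# Write an annotation-free copy of a PGN using python-chess (pip install chess).
# Filenames are placeholders; comments and side variations are dropped on export.
import chess.pgn

with open("input.pgn") as src, open("clean.pgn", "w") as dst:
    exporter = chess.pgn.FileExporter(dst, comments=False, variations=False)
    while True:
        game = chess.pgn.read_game(src)  # one game at a time, so memory stays low
        if game is None:  # end of file
            break
        game.accept(exporter)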
Anyway, if anyone has any interest in a .CBV containing 400+ million chess games, let me know and I can create a torrent or something once it's done (takes quite a bit of time and effort).