https://github.com/jhalterman/failsafe
Failsafe.with(retryPolicy).run(() -> connect());
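Here retryPolicy is assumed to already exist; a minimal sketch using the Failsafe 1.x-era API from the project README (retryOn / withDelay / withMaxRetries):
// Retry connect() on ConnectException, waiting 1 second between attempts, at most 3 retries.
RetryPolicy retryPolicy = new RetryPolicy()
    .retryOn(ConnectException.class)
    .withDelay(1, TimeUnit.SECONDS)
    .withMaxRetries(3);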
Circuit breakers are a way of creating systems that fail-fast by temporarily disabling execution as a way of preventing system overload. Creating a CircuitBreaker is straightforward:
CircuitBreaker breaker = new CircuitBreaker()
.withFailureThreshold(3, 10)
.withSuccessThreshold(5)
.withDelay(1, TimeUnit.MINUTES);
We can then execute a Runnable or Callable with the breaker:
Failsafe.with(breaker).run(this::connect);
https://github.com/OpenFeign/feign
Feign makes writing Java HTTP clients easier.
https://cloud.spring.io/spring-cloud-netflix/multi/multi_spring-cloud-feign.html
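A minimal sketch of Feign's declarative style, adapted from the project README (the GitHub/Contributor types and the feign-gson decoder come from that example):
import java.util.List;

import feign.Feign;
import feign.Param;
import feign.RequestLine;
import feign.gson.GsonDecoder;

interface GitHub {
    @RequestLine("GET /repos/{owner}/{repo}/contributors")
    List<Contributor> contributors(@Param("owner") String owner, @Param("repo") String repo);
}

class Contributor {
    String login;
    int contributions;
}

class Example {
    public static void main(String[] args) {
        // Feign generates the HTTP client from the annotated interface.
        GitHub github = Feign.builder()
                .decoder(new GsonDecoder())
                .target(GitHub.class, "https://api.github.com");
        for (Contributor c : github.contributors("OpenFeign", "feign")) {
            System.out.println(c.login + " (" + c.contributions + ")");
        }
    }
}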
https://metrics.dropwizard.io/4.0.0/
https://github.com/dropwizard/metrics/issues/515
https://github.com/hamcrest/JavaHamcrest/wiki/Related-Projects
https://github.com/lukas-krecan/JsonUnit
It seems there is no Java library that can use a multi-character delimiter.
C# https://github.com/JoshClose/CsvHelper
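A common workaround (an illustrative sketch, not from any of the linked libraries) is to normalize the multi-character delimiter to a single unused character first:
// Collapse a two-character delimiter ("||") to an unused control character,
// then split, or feed the result to any single-character-delimiter CSV parser.
String line = "alice||30||admin";
String normalized = line.replace("||", "\u0001"); // assumes \u0001 never appears in the data
String[] fields = normalized.split("\u0001");     // ["alice", "30", "admin"]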
http://stackoverflow.com/questions/22137343/java-csv-parser-comparisons
http://www.journaldev.com/2544/java-csv-parser
http://opencsv.sourceforge.net/
https://super-csv.github.io/super-csv/examples_dozer.html
CsvDozerBeanReader is the most powerful CSV reader.
https://super-csv.github.io/super-csv/xref-test/org/supercsv/mock/dozer/r.html
private int age;
private Boolean consentGiven;
private List<Answer> answers;
https://super-csv.github.io/super-csv/xref-test/org/supercsv/example/dozer/Reading.html
beanReader = new CsvDozerBeanReader(new FileReader(CSV_FILENAME), CsvPreference.STANDARD_PREFERENCE);
beanReader.getHeader(true); // ignore the header
beanReader.configureBeanMapping(SurveyResponse.class, FIELD_MAPPING);
https://super-csv.github.io/super-csv/xref-test/org/supercsv/example/dozer/Writing.html
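After configureBeanMapping, reading is just a loop over read() (a sketch assuming the SurveyResponse bean and FIELD_MAPPING from the linked example):
SurveyResponse response;
while ((response = beanReader.read(SurveyResponse.class)) != null) {
    System.out.println(response);
}
beanReader.close();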
http://stackoverflow.com/questions/13578428/duplicate-headers-received-from-server
The server SHOULD put double quotes around the filename
Response.AddHeader("Content-Disposition", "attachment;filename=\"" + filename + "\"");
I have also found that a comma in the filename will give that error (in Chrome only). I am thinking there must be a way to tell it that filename="abc,xyz.pdf" is valid. I get that we can replace the "," with something else, but I want to preserve and return the filename exactly as it is. So there is a way to let it still have commas in the filename: just quote the filename.
Response.AddHeader("content-disposition", "attachment; filename=\"" + FileNameWithCommas + "\"");
That was exactly my problem - a comma in the filename AND only Chrome had an issue with that.
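The snippets above are ASP.NET; a hypothetical JAX-RS equivalent (fileName and csvBytes are placeholder names) applies the same fix - quote the filename:
// Quoting the filename keeps Chrome from rejecting commas in it.
return Response.ok(csvBytes, "text/csv")
        .header("Content-Disposition", "attachment; filename=\"" + fileName + "\"")
        .build();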
https://techblog.chegg.com/2014/06/06/how-to-teach-jersey-to-speak-csv/
Jersey CSV Writer
https://github.com/CheggEng/JerseyCSV
Uses FasterXML's jackson-dataformat-csv (see the CsvMapper in the writer below).
http://blog.xyleolabs.com/2015/05/making-your-jersey-client-post-csv-data.html
http://www.javaprocess.com/2015/08/a-simple-csv-messagebodywriter-for-jax.html
import java.io.IOException;
import java.io.OutputStream;
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import java.util.List;

import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.ext.MessageBodyWriter;

import com.fasterxml.jackson.dataformat.csv.CsvMapper;
import com.fasterxml.jackson.dataformat.csv.CsvSchema;

public class CSVMessageBodyWriter implements MessageBodyWriter<List> {

    @Override
    public boolean isWriteable(Class type, Type genericType, Annotation[] annotations, MediaType mediaType) {
        return List.class.isAssignableFrom(type);
    }

    @Override
    public long getSize(List data, Class aClass, Type type, Annotation[] annotations, MediaType mediaType) {
        return 0; // ignored by JAX-RS 2.0 runtimes
    }

    @Override
    public void writeTo(List data, Class aClass, Type type, Annotation[] annotations, MediaType mediaType,
                        MultivaluedMap<String, Object> multivaluedMap, OutputStream outputStream)
            throws IOException, WebApplicationException {
        if (data != null && data.size() > 0) {
            // Derive the CSV schema (columns + header) from the first element's class.
            CsvMapper mapper = new CsvMapper();
            Object o = data.get(0);
            CsvSchema schema = mapper.schemaFor(o.getClass()).withHeader();
            mapper.writer(schema).writeValue(outputStream, data);
        }
    }
}
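For Jersey to actually select this writer at runtime, it typically also needs to be registered as a JAX-RS provider and tied to the CSV media type; a hedged sketch (the linked posts may wire this up differently):
@Provider
@Produces("text/csv")
public class CSVMessageBodyWriter implements MessageBodyWriter<List> {
    // body as above
}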
CSV Parser
http://howtodoinjava.com/core-java/related-concepts/parse-csv-files-in-java/
http://www.journaldev.com/2544/java-csv-parserwriter-example-using-opencsv-apache-commons-csv-and-supercsv
SuperCSV gives us the option to apply conditional logic to fields, which is not available in other CSV parsers.
SuperCSV
Using SuperCSV to Change Header Values
http://stackoverflow.com/questions/21942042/using-supercsv-to-change-header-values
You can put whatever you like in the header - it doesn't have to be identical to the mapping array passed to beanWriter.write(). For example, the following will give the output you desire:
final String[] header = new String[] { "First Name", "Last Name", "Birthday" };
final String[] fieldMapping = new String[] { "firstName", "lastName", "birthDate" };

// write the header
beanWriter.writeHeader(header);

// write the beans
for (final CustomerBean customer : customers) {
    beanWriter.write(customer, fieldMapping, processors);
}
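The snippet above assumes beanWriter and processors already exist; a minimal setup sketch (the output file name is illustrative):
// Create the bean writer used above with standard CSV preferences.
ICsvBeanWriter beanWriter = new CsvBeanWriter(
        new FileWriter("customers.csv"), CsvPreference.STANDARD_PREFERENCE);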
http://stackoverflow.com/questions/21427523/how-to-always-apply-quotes-escape-in-super-csv
There's extensive documentation on the Super CSV website (https://super-csv.github.io/super-csv/preferences.html) on how to configure custom preferences - the feature you're after is the quote mode. Here's an example of using the AlwaysQuoteMode, which quotes every field even if it doesn't contain special characters:
CsvPreference prefs = new CsvPreference.Builder('"', ';', "\n")
        .useQuoteMode(new AlwaysQuoteMode()).build();
ICsvListWriter writer = new CsvListWriter(new OutputStreamWriter(System.out), prefs);
writer.writeHeader("User2", "Web Page");
writer.flush();
Which prints:
"User2";"Web Page"
private static final CsvPreference STANDARD_SKIP_COMMENTS = new CsvPreference.Builder(CsvPreference.STANDARD_PREFERENCE).skipComments(new CommentStartsWith("#")).build();
Update: Super CSV 2.1.0 (released April 2013) allows you to supply a CommentMatcher via the preferences that will let you skip lines that are considered comments. There are 2 built-in matchers you can use, or you can supply your own. In this case you could use new CommentMatches("\\s+") to skip blank lines.
try {
    beanReader = new CsvBeanReader(new FileReader("employees.csv"), CsvPreference.STANDARD_PREFERENCE);

    // the name mapping provides the basis for bean setters
    final String[] nameMapping = new String[] { "id", "name", "role", "salary" };

    // just read the header, so that it doesn't get mapped to an Employee object
    final String[] header = beanReader.getHeader(true);

    final CellProcessor[] processors = getProcessors();
    Employee emp;
    while ((emp = beanReader.read(Employee.class, nameMapping, processors)) != null) {
        emps.add(emp);
    }
} finally {
    if (beanReader != null) {
        beanReader.close();
    }
}
private static CellProcessor[] getProcessors() {
    final CellProcessor[] processors = new CellProcessor[] {
            new UniqueHashCode(), // ID (must be unique)
            new NotNull(),        // Name
            new Optional(),       // Role
            new NotNull()         // Salary
    };
    return processors;
}
https://super-csv.github.io/super-csv/examples_writing.html
// write the customer beans
for (final CustomerBean customer : customers) {
    beanWriter.write(customer, header, processors);
}
// write the customer lists
listWriter.write(john, processors);
listWriter.write(bob, processors);
// write the customer maps
mapWriter.write(john, header, processors);
mapWriter.write(bob, header, processors);
http://www.allenlipeng47.com/blog/index.php/2015/07/18/sftp-in-java/
SFTP in Java
public void downloadFilesFromSftp(String sftpFromDir, String downloadFileName, String toLocalDir) {
    JSch jSch = new JSch();
    Session session = null;
    Channel channel = null;
    try {
        session = jSch.getSession(username, host, 22);
        session.setPassword(password);
        java.util.Properties config = new java.util.Properties();
        config.put("StrictHostKeyChecking", "no");
        config.put("PreferredAuthentications", "publickey,password");
        session.setConfig(config);
        session.connect();
        channel = session.openChannel("sftp");
        channel.connect();
        ChannelSftp channelSftp = (ChannelSftp) channel;
        channelSftp.cd(sftpFromDir);
        Vector<ChannelSftp.LsEntry> list = channelSftp.ls(downloadFileName);
        for (ChannelSftp.LsEntry entry : list) {
            try {
                Path filePath = Paths.get(toLocalDir + "/" + entry.getFilename());
                Files.copy(channelSftp.get(sftpFromDir + "/" + entry.getFilename()), filePath,
                        StandardCopyOption.REPLACE_EXISTING);
            } catch (IOException e) {
                logger.error("Encountered error when dealing with " + entry.getFilename() + ":" + e.toString());
            }
        }
    } catch (Exception e) {
        logger.error(e.getMessage());
    } finally {
        if (channel != null) {
            channel.disconnect();
        }
        if (session != null) {
            session.disconnect();
        }
    }
}

public void uploadFilesFromSftp(String localFromDir, String uploadFileName, String toSftpDir) {
    JSch jSch = new JSch();
    Session session = null;
    Channel channel = null;
    try {
        session = jSch.getSession(username, host, 22);
        session.setPassword(password);
        java.util.Properties config = new java.util.Properties();
        config.put("StrictHostKeyChecking", "no");
        config.put("PreferredAuthentications", "publickey,password");
        session.setConfig(config);
        session.connect();
        channel = session.openChannel("sftp");
        channel.connect();
        ChannelSftp channelSftp = (ChannelSftp) channel;
        DirectoryStream<Path> ds = Files.newDirectoryStream(Paths.get(localFromDir), uploadFileName);
        for (Path localFileFrom : ds) {
            try {
                channelSftp.put(new FileInputStream(new File(localFileFrom.toString())),
                        toSftpDir + "/" + localFileFrom.getFileName());
            } catch (IOException e) {
                logger.error("Encountered error when dealing with " + localFileFrom.toString() + ":" + e.toString());
            }
        }
    } catch (Exception e) {
        logger.error(e.getMessage());
    } finally {
        if (channel != null) {
            channel.disconnect();
        }
        if (session != null) {
            session.disconnect();
        }
    }
}
http://www.allenlipeng47.com/blog/index.php/2015/08/03/springbatch-cursor-reader/
Spring Batch's default cursor reader streams data steadily. It has the following advantages over JdbcTemplate (see the sketch after this list):
1. A traditional JdbcTemplate reader loads all the data in one go, which can mean a very long wait. The cursor reader reads steadily, so we don't need to wait.
2. JdbcTemplate reads everything at once and puts it all in memory. The cursor reader avoids this problem by reading items one by one.
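A minimal sketch of the cursor-based reader (Spring Batch's JdbcCursorItemReader; the SQL, fetch size, and Employee row type here are illustrative assumptions):
import javax.sql.DataSource;

import org.springframework.batch.item.database.JdbcCursorItemReader;
import org.springframework.jdbc.core.BeanPropertyRowMapper;

public class EmployeeReaderFactory {
    // Streams rows from an open JDBC cursor instead of materializing the whole
    // result set in memory the way a plain JdbcTemplate query would.
    public static JdbcCursorItemReader<Employee> employeeReader(DataSource dataSource) {
        JdbcCursorItemReader<Employee> reader = new JdbcCursorItemReader<>();
        reader.setDataSource(dataSource);
        reader.setSql("SELECT id, name, role, salary FROM employees");
        reader.setFetchSize(100); // rows per round trip; keeps memory bounded
        reader.setRowMapper(new BeanPropertyRowMapper<>(Employee.class));
        return reader;
    }
}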
http://demeranville.com/how-not-to-parse-csv-using-java/
CsvMapper mapper = new CsvMapper();
mapper.enable(CsvParser.Feature.WRAP_AS_ARRAY);
File csvFile = new File("input.csv"); // or from String, URL etc
MappingIterator<Object[]> it = mapper.reader(Object[].class).readValues(csvFile);
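Consuming the iterator is then a simple loop (a small usage sketch; with WRAP_AS_ARRAY enabled, each CSV row comes back as an Object[]):
while (it.hasNext()) {
    Object[] row = it.next(); // one array per CSV line
    System.out.println(java.util.Arrays.toString(row));
}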
public class CSVPerson {
    public String firstname;
    public String lastname;
    //etc
}

CsvMapper mapper = new CsvMapper();
CsvSchema schema = CsvSchema.emptySchema().withHeader().withColumnSeparator(delimiter);
MappingIterator<CSVPerson> it = mapper.reader(CSVPerson.class).with(schema).readValues(input);
while (it.hasNext()) {
    CSVPerson row = it.next();
}
http://demeranville.com/writing-csv-using-jackson-csvmapper-mixin-annotations/
public String toCSV(List<YourPojo> listOfPojos) {
    CsvMapper mapper = new CsvMapper();
    CsvSchema schema = mapper.schemaFor(YourPojo.class).withHeader();
    return mapper.writer(schema).writeValueAsString(listOfPojos);
}
@JsonPropertyOrder(value = { "name", "dob" })
public class Person implements Serializable {
    private Date dob;
    private String name;

    @JsonFormat(shape = JsonFormat.Shape.STRING, pattern = "yyyy-MM-dd")
    public Date getDob() {
        return dob;
    }

    public String getName() {
        return name;
    }

    //setters etc...
}
And what if you want JSON to format one way and CSV to format dates another? What if different roles of users see different fields?
Mixins are annotated classes that Jackson can use to 'overlay' annotations onto un-annotated classes. For example, you could define the following pair of classes and ask Jackson to apply the annotations from the abstract class to the concrete one:
abstract class MixIn {
    MixIn(@JsonProperty("width") int w, @JsonProperty("height") int h) { }
    // note: could alternatively annotate fields "w" and "h" as well -- if so, would need to @JsonIgnore getters

    @JsonProperty("width") abstract int getW();  // rename property
    @JsonProperty("height") abstract int getH(); // rename property

    @JsonIgnore abstract int getSize(); // we don't need it!
}
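The second class in the pair, the un-annotated concrete type, isn't preserved in these notes; a plausible reconstruction (illustrative only) would be:
// Hypothetical un-annotated domain class targeted by the MixIn above.
public class Rectangle {
    private final int w, h;
    public Rectangle(int w, int h) { this.w = w; this.h = h; }
    public int getW() { return w; }
    public int getH() { return h; }
    public int getSize() { return w * h; }
}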
and to configure our ObjectMapper we'd use:
objectMapper.getSerializationConfig().addMixInAnnotations(Rectangle.class, MixIn.class);
objectMapper.getDeserializationConfig().addMixInAnnotations(Rectangle.class, MixIn.class);
Cool! We've now decoupled our serialisation formats from our domain objects. We can also use this to select which fields to include when serialising, using a combination of @JsonProperty and @JsonIgnore.
http://superuser.com/questions/234997/how-can-i-stop-excel-from-eating-my-delicious-csv-files-and-excreting-useless-da
We had a similar problem where we had CSV files with columns containing ranges such as 3-5, and Excel would always convert them to dates, e.g. 3-5 became 3-Mar, after which switching back to numeric gave us a useless date integer. We got around it by:
- Renaming the CSV to TXT extension
- Then when we opened it in Excel, this would kick in the text import wizard
- In Step 3 of 3 in the wizard we told it the columns in question were text and they imported properly.
You could do the same here I would think.
Use Apache Commons CSV
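Note: in the snippet below, TITLE, AUTHOR, PAGES, and CATEGORY are values of a statically imported header enum, and Book/Category are the domain types the records map to - assumptions inferred from the code, since the imports aren't shown.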
public void convertCsvToJson(File inputCsv,File outputJson) throws IOException {
List<Book> books = new ArrayList<>();
List<CSVRecord> records = CSVFormat
.DEFAULT
.withHeader(TITLE.name(), AUTHOR.name(),
PAGES.name(), CATEGORY.name())
.parse(new FileReader(inputCsv))
.getRecords();
for (CSVRecord record : records) {
String title = record.get(TITLE);
String author = record.get(AUTHOR);
Integer pages = Integer.parseInt(record.get(PAGES));
Category category = Category.valueOf(record.get(CATEGORY));
books.add(new Book(title, author, pages, category));
}
ObjectMapper mapper = new ObjectMapper();
mapper.writeValue(outputJson, books);
}