- What is the High School Graduation Rate?
- What is the Percent with an associate's degree?
- What is the College Graduation Rate?
- What is the Percent with a graduate or professional degree?
- What is the Population Count?
- What is the Median Earnings?
- What is the Number of Employees?
- What is the GDP per capita?
- What is the Annual Personal Income?
- What is the Cost of Living Index?
The percentage of people in the New Orleans Metro Area (LA) who did not finish the 9th grade was 4.90% in 2017.
Education and Graduation Rates Datasets Involving New Orleans Metro Area (LA)
- API daisi.datacenterresearch.org | Last Updated 2018-07-18T21:38:07.000Z
Percent of public school students truant 2008-2015 in New Orleans and Louisiana
- API data.lacity.org | Last Updated 2018-09-25T22:04:58.000Z
All-City event calendar - ARCHIVED. For the new LA City Events dataset (refreshed daily), see https://data.lacity.org/A-Prosperous-City/LA-City-Events/rx9t-fp7k
- API daisi.datacenterresearch.org | Last Updated 2018-07-17T20:58:51.000Z
Third grade students at each achievement level in spring 2016 in English language arts in New Orleans and Louisiana
- API opendata.utah.gov | Last Updated 2014-10-31T18:32:26.000Z
Number Of People Aged 25 Or Older With High School Diploma Or Equivalent, All States
- API opendata.utah.gov | Last Updated 2015-03-17T17:36:47.000Z
Percent Of Children Tested Annual Blood Lead Levels All States
- API www.datos.gov.co | Last Updated 2018-03-10T01:14:48.000Z
<b>DESCRIPTION</b> The dataset contains word-occurrence statistics (frequencies) from text corpora obtained from Wikipedia in 32 languages. The corpus size for each language is balanced at approximately 5,000,000 words. The attached "metadata.csv" file contains, for each corpus, the ISO 639-1 code of the language, the Spanish name of the language, the English name of the language, the total number of Wikipedia articles, the total number of words, the vocabulary size, and the source URL of the Wikipedia dump. The corpus for each language consists of the sequence of the first articles in the Wikipedia dump until the threshold of 5,000,000 words was reached. Word boundaries are identified by a simple tokenizer that splits on the space character and other punctuation marks. The data was collected during October and November 2015.<br> <b>NOTES</b> Note 1: since the vocabulary size differs for each language, the number of data rows for each language varies too. Empty words can be identified by the "%" word and zeroes (0) in the RANK and FREQ columns.<br> Note 2: if you want to open the .csv file with MS Excel, keep in mind that the file is encoded in UTF-8 and that this information is not stored in the file. To open it so that all characters are recognized correctly, follow these steps: <br> 1. Create a new blank spreadsheet (or blank workbook). 2. Go to the "Data" menu or tab. 3. Select "Get External Data". 4. Select "From Text". 5. Select the downloaded .csv file. 6. Select the "Unicode UTF-8" encoding and "Delimited" as the file type. 7. Select "comma" as the field delimiter and double quotes (") as the text qualifier. <br> Be aware that the file is relatively large for Excel (0.3 GB), so the import could take several minutes. The following link shows a figure obtained from this data illustrating Zipf's law.
<br> <a href=https://en.wikipedia.org/wiki/Zipf%27s_law#/media/File:Zipf_30wiki_en_labels.png>https://en.wikipedia.org/wiki/Zipf%27s_law#/media/File:Zipf_30wiki_en_labels.png</a><br> This dataset is a collaborative effort of the students of the course "Análisis Computacional del Lenguaje" (Computational Language Analysis), taught in November 2015 at the Instituto Caro y Cuervo, Bogotá D.C., Colombia, by Professor <a href="https://sites.google.com/site/sergiojimenezvargas/">Sergio Jiménez</a>.
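As an alternative to the Excel import described in Note 2, the frequency files can be read programmatically. The sketch below is a minimal illustration, assuming a word column named WORD alongside the RANK and FREQ columns mentioned in the description (the actual column names and file names in the dataset may differ); it uses an in-memory sample instead of the real 0.3 GB file.

```python
import csv
import io

# In-memory sample mimicking the format described above: one row per word
# with its rank and frequency. The placeholder "%" marks an empty word,
# with zeroes in RANK and FREQ. (The WORD column name is an assumption.)
sample = io.StringIO(
    "WORD,RANK,FREQ\n"
    "de,1,250000\n"
    "la,2,180000\n"
    "%,0,0\n"
)

# For the real file, open it with an explicit encoding so UTF-8 characters
# are decoded correctly, e.g.:
#   open("frequencies.csv", encoding="utf-8", newline="")
reader = csv.DictReader(sample)

# Drop the empty-word placeholder rows flagged with "%".
rows = [r for r in reader if r["WORD"] != "%"]

print(len(rows))        # 2
print(rows[0]["WORD"])  # de
```

Passing the encoding explicitly serves the same purpose as choosing "Unicode UTF-8" in Excel's import wizard: the file itself carries no encoding marker, so the reader must be told.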