Class CfnDataSource.WebCrawlerConfigurationProperty
Provides the configuration information required for Amazon Kendra Web Crawler.
Inheritance: System.Object → WebCrawlerConfigurationProperty
Namespace: Amazon.CDK.AWS.Kendra
Assembly: Amazon.CDK.AWS.Kendra.dll
Syntax (csharp)
public class WebCrawlerConfigurationProperty : Object, CfnDataSource.IWebCrawlerConfigurationProperty
Syntax (vb)
Public Class WebCrawlerConfigurationProperty
Inherits Object
Implements CfnDataSource.IWebCrawlerConfigurationProperty
Remarks
ExampleMetadata: fixture=_generated
Examples
// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
using Amazon.CDK.AWS.Kendra;
var webCrawlerConfigurationProperty = new WebCrawlerConfigurationProperty {
    Urls = new WebCrawlerUrlsProperty {
        SeedUrlConfiguration = new WebCrawlerSeedUrlConfigurationProperty {
            SeedUrls = new [] { "seedUrls" },

            // the properties below are optional
            WebCrawlerMode = "webCrawlerMode"
        },
        SiteMapsConfiguration = new WebCrawlerSiteMapsConfigurationProperty {
            SiteMaps = new [] { "siteMaps" }
        }
    },

    // the properties below are optional
    AuthenticationConfiguration = new WebCrawlerAuthenticationConfigurationProperty {
        BasicAuthentication = new [] { new WebCrawlerBasicAuthenticationProperty {
            Credentials = "credentials",
            Host = "host",
            Port = 123
        } }
    },
    CrawlDepth = 123,
    MaxContentSizePerPageInMegaBytes = 123,
    MaxLinksPerPage = 123,
    MaxUrlsPerMinuteCrawlRate = 123,
    ProxyConfiguration = new ProxyConfigurationProperty {
        Host = "host",
        Port = 123,

        // the properties below are optional
        Credentials = "credentials"
    },
    UrlExclusionPatterns = new [] { "urlExclusionPatterns" },
    UrlInclusionPatterns = new [] { "urlInclusionPatterns" }
};
Synopsis
Constructors
WebCrawlerConfigurationProperty()
Properties
AuthenticationConfiguration | Configuration information required to connect to websites using authentication.
CrawlDepth | The 'depth' or number of levels from the seed level to crawl.
MaxContentSizePerPageInMegaBytes | The maximum size (in MB) of a web page or attachment to crawl.
MaxLinksPerPage | The maximum number of URLs on a web page to include when crawling a website.
MaxUrlsPerMinuteCrawlRate | The maximum number of URLs crawled per website host per minute.
ProxyConfiguration | Configuration information required to connect to your internal websites via a web proxy.
UrlExclusionPatterns | A list of regular expression patterns to exclude certain URLs from the crawl.
UrlInclusionPatterns | A list of regular expression patterns to include certain URLs in the crawl.
Urls | Specifies the seed or starting point URLs of the websites, or the sitemap URLs of the websites, you want to crawl.
Constructors
WebCrawlerConfigurationProperty()
public WebCrawlerConfigurationProperty()
Properties
AuthenticationConfiguration
Configuration information required to connect to websites using authentication.
public object AuthenticationConfiguration { get; set; }
Property Value
System.Object
Remarks
You can connect to websites using basic authentication of user name and password. You use a secret in AWS Secrets Manager to store your authentication credentials.
You must provide the website host name and port number. For example, the host name of https://a.example.com/page1.html is "a.example.com" and the port is 443, the standard port for HTTPS.
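As a minimal sketch, the snippet below supplies basic authentication for a site served over HTTPS. The host, port, and Secrets Manager secret ARN are hypothetical placeholders; the secret would store the user name and password.
using Amazon.CDK.AWS.Kendra;

// Sketch: basic authentication for pages hosted on a.example.com over HTTPS.
// The secret ARN is a placeholder; the secret stores the user name and password.
var authenticationConfiguration = new WebCrawlerAuthenticationConfigurationProperty {
    BasicAuthentication = new [] { new WebCrawlerBasicAuthenticationProperty {
        Credentials = "arn:aws:secretsmanager:us-east-1:123456789012:secret:crawler-credentials",
        Host = "a.example.com",
        Port = 443  // standard port for HTTPS
    } }
};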
CrawlDepth
The 'depth' or number of levels from the seed level to crawl.
public Nullable<double> CrawlDepth { get; set; }
Property Value
System.Nullable<System.Double>
Remarks
For example, the seed URL page is depth 1 and any hyperlinks on this page that are also crawled are depth 2.
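A minimal sketch, assuming a placeholder seed URL, that limits the crawl to those two levels:
using Amazon.CDK.AWS.Kendra;

// Sketch: index the seed page (depth 1) plus the pages it links to (depth 2).
var crawlDepthExample = new WebCrawlerConfigurationProperty {
    Urls = new WebCrawlerUrlsProperty {
        SeedUrlConfiguration = new WebCrawlerSeedUrlConfigurationProperty {
            SeedUrls = new [] { "https://a.example.com/page1.html" }  // placeholder seed URL
        }
    },
    CrawlDepth = 2
};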
MaxContentSizePerPageInMegaBytes
The maximum size (in MB) of a web page or attachment to crawl.
public Nullable<double> MaxContentSizePerPageInMegaBytes { get; set; }
Property Value
System.Nullable<System.Double>
Remarks
Files larger than this size (in MB) are skipped and not crawled.
The default maximum size of a web page or attachment is set to 50 MB.
MaxLinksPerPage
The maximum number of URLs on a web page to include when crawling a website.
public Nullable<double> MaxLinksPerPage { get; set; }
Property Value
System.Nullable<System.Double>
Remarks
This number is per web page.
As a website’s web pages are crawled, any URLs the web pages link to are also crawled. URLs on a web page are crawled in order of appearance.
The default maximum links per page is 100.
MaxUrlsPerMinuteCrawlRate
The maximum number of URLs crawled per website host per minute.
public Nullable<double> MaxUrlsPerMinuteCrawlRate { get; set; }
Property Value
System.Nullable<System.Double>
Remarks
A minimum of one URL is required.
The default maximum number of URLs crawled per website host per minute is 300.
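A minimal sketch that tightens the three limits described above; the values are illustrative only (the defaults are 50 MB per page, 100 links per page, and 300 URLs per host per minute).
using Amazon.CDK.AWS.Kendra;

// Sketch: override the default size and throttling limits with illustrative values.
var crawlLimitsExample = new WebCrawlerConfigurationProperty {
    Urls = new WebCrawlerUrlsProperty {
        SeedUrlConfiguration = new WebCrawlerSeedUrlConfigurationProperty {
            SeedUrls = new [] { "https://a.example.com/page1.html" }  // placeholder seed URL
        }
    },
    MaxContentSizePerPageInMegaBytes = 25,  // skip pages and attachments over 25 MB (default 50)
    MaxLinksPerPage = 50,                   // follow at most 50 links per page (default 100)
    MaxUrlsPerMinuteCrawlRate = 100         // crawl at most 100 URLs per host per minute (default 300)
};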
ProxyConfiguration
Configuration information required to connect to your internal websites via a web proxy.
public object ProxyConfiguration { get; set; }
Property Value
System.Object
Remarks
You must provide the website host name and port number. For example, the host name of https://a.example.com/page1.html is "a.example.com" and the port is 443, the standard port for HTTPS.
Web proxy credentials are optional; you can use them to connect to a web proxy server that requires basic authentication. To store web proxy credentials, you use a secret in AWS Secrets Manager.
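A minimal sketch, assuming a hypothetical internal proxy at proxy.example.com on port 8080. The optional Credentials value points to a placeholder Secrets Manager secret and is only needed when the proxy requires basic authentication.
using Amazon.CDK.AWS.Kendra;

// Sketch: route the crawler through an internal web proxy.
// Host, port, and the secret ARN are hypothetical placeholders.
var proxyConfiguration = new ProxyConfigurationProperty {
    Host = "proxy.example.com",
    Port = 8080,

    // Optional: only needed when the proxy itself requires basic authentication.
    Credentials = "arn:aws:secretsmanager:us-east-1:123456789012:secret:proxy-credentials"
};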
UrlExclusionPatterns
A list of regular expression patterns to exclude certain URLs from the crawl.
public string[] UrlExclusionPatterns { get; set; }
Property Value
System.String[]
Remarks
URLs that match the patterns are excluded from the index. URLs that don't match the patterns are included in the index. If a URL matches both an inclusion and an exclusion pattern, the exclusion pattern takes precedence and the URL isn't included in the index.
UrlInclusionPatterns
A list of regular expression patterns to include certain URLs in the crawl.
public string[] UrlInclusionPatterns { get; set; }
Property Value
System.String[]
Remarks
URLs that match the patterns are included in the index. URLs that don't match the patterns are excluded from the index. If a URL matches both an inclusion and an exclusion pattern, the exclusion pattern takes precedence and the URL isn't included in the index.
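To illustrate the precedence rule, the hypothetical patterns below include documentation pages while excluding anything under /internal/. A URL such as https://a.example.com/docs/internal/notes.html matches both patterns, so it is excluded.
using Amazon.CDK.AWS.Kendra;

// Sketch: include docs pages, exclude anything under /internal/.
// A URL matching both patterns is excluded, because exclusion takes precedence.
var patternExample = new WebCrawlerConfigurationProperty {
    Urls = new WebCrawlerUrlsProperty {
        SeedUrlConfiguration = new WebCrawlerSeedUrlConfigurationProperty {
            SeedUrls = new [] { "https://a.example.com/docs/index.html" }  // placeholder seed URL
        }
    },
    UrlInclusionPatterns = new [] { ".*/docs/.*" },
    UrlExclusionPatterns = new [] { ".*/internal/.*" }
};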
Urls
Specifies the seed or starting point URLs of the websites or the sitemap URLs of the websites you want to crawl.
public object Urls { get; set; }
Property Value
System.Object
Remarks
You can include website subdomains. You can list up to 100 seed URLs and up to three sitemap URLs.
You can only crawl websites that use the secure communication protocol, Hypertext Transfer Protocol Secure (HTTPS). If you receive an error when crawling a website, it could be that the website is blocked from crawling.
When selecting websites to index, you must adhere to the Amazon Acceptable Use Policy and all other Amazon terms. Remember that you must only use Amazon Kendra Web Crawler to index your own webpages, or webpages that you have authorization to index.
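To show both starting-point styles together, here is a hedged sketch combining placeholder HTTPS seed URLs with a placeholder sitemap URL, within the limits above (up to 100 seed URLs and three sitemap URLs).
using Amazon.CDK.AWS.Kendra;

// Sketch: seed URLs and a sitemap as crawl starting points.
// All URLs are placeholders and must use HTTPS.
var urls = new WebCrawlerUrlsProperty {
    SeedUrlConfiguration = new WebCrawlerSeedUrlConfigurationProperty {
        SeedUrls = new [] {
            "https://a.example.com/page1.html",
            "https://docs.example.com/index.html"
        },
        // Optional crawl scope: "HOST_ONLY" (default), "SUBDOMAINS", or "EVERYTHING".
        WebCrawlerMode = "SUBDOMAINS"
    },
    SiteMapsConfiguration = new WebCrawlerSiteMapsConfigurationProperty {
        SiteMaps = new [] { "https://a.example.com/sitemap.xml" }
    }
};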